
Open Access 20-07-2023 | Original Article

MABAC framework for logarithmic bipolar fuzzy multiple attribute group decision-making for supplier selection

Authors: Chiranjibe Jana, Harish Garg, Madhumangal Pal, Biswajit Sarkar, Guiwu Wei

Published in: Complex & Intelligent Systems | Issue 1/2024


Abstract

In this article, we introduce logarithmic operations on bipolar fuzzy numbers (BFNs). We present some new operators based on these operations, namely, the logarithmic bipolar fuzzy weighted averaging (L-BFWA) operator, the logarithmic bipolar fuzzy ordered weighted averaging (L-BFOWA) operator, the logarithmic bipolar fuzzy weighted geometric (L-BFWG) operator, and the logarithmic bipolar fuzzy ordered weighted geometric (L-BFOWG) operator. Further, we develop a multi-attribute group decision-making (MAGDM) methodology based on the logarithmic bipolar fuzzy weighted averaging and geometric operators. To justify the proposed model's efficiency, the MABAC (multiple attribute border approximation area comparison) method is applied to construct MAGDM with BFNs based on the proposed operators. To demonstrate the proposed approach's materiality and efficiency, we use it to solve a supply chain management problem, considering a numerical example of supplier selection. The selection of suppliers is investigated with the aggregation operators to verify the MABAC technique. The presented method is compared with some existing aggregation operators to study the feasibility and applicability of the proposed model. We conclude that the proposed model is accurate, effective, and reliable.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

MADM and MAGDM methods have applications in several sectors of modern technology and have attracted considerable attention from researchers. In the aforementioned studies, investigators have examined an assortment of MADM techniques such as the TOPSIS method [1], PROMETHEE [2], the EDAS method [3], VIKOR [4], the TODIM method [5], and so on. The MABAC method, which can keep competing attributes in view during decision-making, was first introduced in [6]. According to the MABAC design model, the benefit of using border approximation areas (BAA) is to capture the intangibility of decision-makers (DMs) and the ambiguity of choice theory, so as to obtain robust and reasonable comprehensive results.
Zhang [7, 8] was the first to propose the BFS, based on Zadeh's [9] concept of fuzzy sets (FS). The BFS mechanism is not only applied in logic and BF set ideas but is also used in other valuable areas; for example, Gul [10] suggested BF aggregation operators and used them for modeling MADM systems. Further, Jana et al. [11] applied the Dombi approach to aggregate BFNs and formed a MADM method using BF Dombi operators. Wei et al. [12] were motivated to apply the Hamacher approach to build a MADM technique in the same domain after utilizing it in decision-making. Jana et al. [13] presented a Dombi prioritized model for applying the MADM approach to the BF structure in this way. Using BFSS operators, Jana et al. [14] devised a unique MCDM approach. There are, however, no investigations of the MABAC technique using BFNs in the existing BFN literature. As a result, we developed a MADM strategy for MABAC using BFNs to fill this research gap. Based on the essential operations of BFNs, we define certain logarithmic operations on BFNs in this work. Here, we introduce the L-BFWA, L-BFOWA, L-BFWG, and L-BFOWG operators as AOs based on the motivating results [15, 16]. Then, using a MAGDM methodology, we introduce MABAC systems based on the standard MABAC approach using BF data. We also devise a MAGDM approach based on the L-BFWA and L-BFWG operators. Finally, we present a numerical illustration of the novel method with BFNs for supplier selection, followed by a classified justification of the proposed BF MABAC system against excellent current operators, such as the BFWA and BFWG operators, the BFDWA and BFDWG operators, and the BFHWA and BFHWG operators, to demonstrate the novel approach's effectiveness and feasibility.
The remainder of the paper is organized as follows. The section “Literature review” revisits several prior studies. Definitions, working formulas, score and accuracy definitions, and some BF aggregation operators are covered in the section “Preliminaries”. Some logarithmic bipolar fuzzy weighted aggregation operators are defined in the section “Logarithmic BFNs”. The original MABAC method and its working procedure are described in the section “Traditional MABAC”. The MAGDM approach based on logarithmic BF operators and the MABAC method for MAGDM with BFNs are presented in the section “MADM model of the proposed method”. A supplier selection problem is provided in the section “Numerical example” to describe the recommended approach. The section “Comparative results” compares this approach with some recent methods. Finally, conclusions and directions for future research are given in the last section.

Literature review

Atanassov [17] introduced the intuitionistic fuzzy set (IFS), an assertive extension of the fuzzy set (FS) [9] that exclusively included membership issues; an IFS is described by a membership function (MF) and a non-membership function (NMF). The author [18] has extensively investigated and exploited the concepts of comfort, perspective, indeterminacy, and conflict. IFS is a more general conception than FS and depicts entities of the world more completely through these concepts. To solve MADM systems, Xu and Yager [19] applied the OWA operator. In the last few years, different advanced and intelligent decision-making techniques have been proposed by several researchers [20–26].
Even though IFS and IVIFS have been used to tackle real-world problems, in some circumstances a counter property exists to communicate information about an object corresponding to each attribute. Zhang [7, 8] introduced BFS, a new extension of FS, to solve these problems. BFS's MF and NMF have a larger range of \([-1,1]\). BFS was then viewed as a novel technique for managing uncertainty in real-world challenges. BFS has been used not only in the bipolar analytical sense [27], but also in additional areas such as computational psychiatry [28], medicine [29–32], bipolar logic-based quantum computing and cellular combinatorics [32], decision analysis and modeling [33, 34], physics and philosophy [35], and BF network selection [36, 37]. Gul [10] recently created AOs in the BF domain and defined the BFWA and BFWG operators, which were used to solve MADM systems. Later, Wei et al. [38] enforced the Hamacher approach to aggregate BF data in the MADM approach. Wang et al. [39] also devised the Frank Choquet Bonferroni mean strategy for accumulating BF neutrosophic arguments to solve MCDM difficulties. Gao et al. [40] developed dual hesitant BF average and geometric operators using prioritized Hamacher norms. Lu et al. [41] proposed B2TLHA and B2TLHG operators and created MADM systems based on the bipolar 2-tuple linguistic (B2L) idea. The DHBFWA averaging and DHBFWG geometric operators were introduced by Xu et al. [42] in a dual hesitant BF environment and were used to solve MADM systems. Wei et al. [43] explored hesitant BF ordered weighted geometric and averaging operators and used them to solve a MADM issue. The security provider selection problem was studied with a probabilistic linguistic TODIM approach [44], and a TODIM method was given for the interval-valued bipolar fuzzy MAGDM approach [45]. Using IVB2TLN perspectives, the measurement of freight fluidity alternatives based on a fuzzy EDAS model was proposed by Deveci et al. [46]. Fuzzy TODIM with type-2 Gaussian numbers applied to a healthcare issue was introduced by Tolga et al. [47]. Tolga and Basar [48] developed fuzzy MCDM techniques to evaluate a smart system for hydroponic vertical farming.
In the decision-making process, [49] regarded a BF rough approach. Some meta-heuristic approaches for green preservation technology were proposed by Saha et al. [50]. In MCDM estimation methodology, [51] introduced the BF soft set, and [52] introduced ELECTRE 1 systems again. Contributors have recently focused their attention on MADM issues. Peng and Yang [53] later concentrated on an extended MABAC approach in a Pythagorean fuzzy environment. In addition, [54] used an innovative MABAC approach utilizing IVIFNs. Pamucar et al. [55] modified the classic MABAC system after that. A hybrid IR-AHP-MABAC system was developed by [56]. Yu et al. [57] utilized a likelihood approach to establish the MABAC technique based on intuitionistic trapezoidal linguistic numbers (ITLNs). The MABAC approach was first used with HFLNs by Sun et al. [58], who used the model to compute patient prioritization level selection. A novel system of MCDM approaches related to AHP and MABAC was introduced by Roy et al. [59]. With IVIFS information, Mishra et al. [60] considered a MABAC method-based MCDM methodology. [61] used q-rung orthopair fuzzy information to apply MABAC systems. Specific MADM systems in the present literature are based on BF operators [11, 13, 14, 38] and BF algebraic structures [51, 62, 63], among others. However, we are not aware of any studies of the kind discussed in this work. This work aims to fill a research gap concerning logarithmic bipolar fuzzy operators for MAGDM justified by the MABAC approach; existing studies do not consider the MABAC technique and logarithmic bipolar fuzzy AOs at the same time. On the other hand, the proposed method uses the existing operators [15, 16] to aggregate bipolar fuzzy arguments, and no study exists on the proposed method. This model attempts to close a research gap by addressing the MABAC method [53–60] for logarithmic bipolar fuzzy MCDM using BF data. As a result, the proposed model is more efficient and advanced than other current methods [11, 13, 14, 38, 51, 62, 63]. The objectives of this paper are to:
  • develop, in conjunction with the MABAC technique, a connection with logarithmic bipolar fuzzy aggregation operators;
  • investigate some properties of the proposed operators;
  • demonstrate the suggested concept with a numerical case study; and
  • numerically validate the superiority of the suggested study against specific known methodologies.

Preliminaries

This section recalls some fundamental notions connected to BFSs over the universe of discourse X.

BFSs

Definition 3.1
[38] Let L be a BFS over the fixed set X, depicted as
$$\begin{aligned} L=\{\langle x,{\mu }^{+}_L(x),{\nu }^{-}_L(x)\rangle |x\in X\}, \end{aligned}$$
where \({\mu }^{+}_L\) is considered the PMD and \({\nu }^{-}_L(x)\) is considered the NNMD, with \(\mu ^{+}_L\in [0,1]\) and \(\nu ^{-}_L\in [-1,0]\), of an element x of the BFS L, for each \(x\in X\). Also, \(\zeta (x)=1-\mu ^{+}_L(x)+\nu ^{-}_L(x)\) indicates the indeterminacy degree of x for \(x\in L\). The pair \(\langle (\mu ^{+}_L,\nu ^{-}_L)\rangle \) is called a bipolar fuzzy number (BFN) or bipolar fuzzy value (BFV).
[38] expressed operations between BFNs as follows:
Definition 3.2
[38] Let \(L=(\langle {\mu }^{+}_L(x),{\nu }^{-}_L(x)\rangle )\) and \(M=(\langle {\mu }^{+}_M(x),{\nu }^{-}_M(x)\rangle )\) be two BFVs over X, for all \(x\in X\):
(1)
\(L\wedge M=\{\langle x, \min \{{\mu }^{+}_L(x),{\mu }^{+}_M(x)\}, \max \{{\nu }^{-}_L(x),{\nu }^{-}_M\}\rangle \}\)
 
(2)
\(L\vee M=\{\langle x, \max \{{\mu }^{+}_L(x),{\mu }^{+}_M(x)\}, \min \{{\nu }^{-}_L(x),{\nu }^{-}_M\}\rangle \}\)
 
(3)
\(L\oplus M=\big (\big \langle {\mu }^{+}_L(x)+{\mu }^{+}_M(x)-{\mu }^{+}_L(x){\mu }^{+}_M(x), -|{\nu }^{-}_L(x)| |{\nu }^{-}_M(x)|\big \rangle \big )\)
 
(4)
\(L\otimes M=\big (\big \langle {\mu }^{+}_L(x){\mu }^{+}_M(x),\! {\nu }^{-}_L(x)\!+\!{\nu }^{-}_M(x)-{\nu }^{-}_L(x){\nu }^{-}_M(x)\big \rangle \big )\)
 
(5)
\(\lambda L=\big (1-(1-{\mu }^{+}_L(x))^{\lambda }, -|{\nu }^{-}_L(x)|^{\lambda }\big )\)
 
(6)
\(L^{\lambda }=\big (({\mu }^{+}_L(x))^{\lambda }, -1+|1+{\nu }^{-}_L(x)|^{\lambda }\big )\).
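For readers who wish to experiment with these laws, the following minimal Python sketch (ours, added for illustration and not part of the original formulation) models a BFN as a pair \((\mu ^{+},\nu ^{-})\) and encodes rules (3)–(6) of Definition 3.2; all function names are illustrative.

```python
# Minimal sketch of the BFN operational laws in Definition 3.2 (illustrative).
# A BFN is modeled as a tuple (mu, nu) with mu in [0, 1] and nu in [-1, 0].

def bfn_sum(L, M):
    """Rule (3): L (+) M."""
    mu_L, nu_L = L
    mu_M, nu_M = M
    return (mu_L + mu_M - mu_L * mu_M, -abs(nu_L) * abs(nu_M))

def bfn_product(L, M):
    """Rule (4): L (x) M."""
    mu_L, nu_L = L
    mu_M, nu_M = M
    return (mu_L * mu_M, nu_L + nu_M - nu_L * nu_M)

def bfn_scale(lam, L):
    """Rule (5): lambda * L for lambda > 0."""
    mu, nu = L
    return (1 - (1 - mu) ** lam, -abs(nu) ** lam)

def bfn_power(L, lam):
    """Rule (6): L ** lambda for lambda > 0."""
    mu, nu = L
    return (mu ** lam, -1 + abs(1 + nu) ** lam)

if __name__ == "__main__":
    L, M = (0.7, -0.4), (0.5, -0.2)
    print(bfn_sum(L, M), bfn_product(L, M), bfn_scale(2, L), bfn_power(L, 2))
```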
 
Definition 3.3
[38] Let \(L=({\mu }_L,{\nu }_L)\) be a BFVs, and the score \(\Theta \) of L is mentioned as
$$\begin{aligned} \Theta (L)=\frac{1+{\mu }^{+}_L+{\nu }^{-}_L}{2}, \Theta (L)\in [0,1], \end{aligned}$$
(1)
and accuracy \(\Phi \) of L is defined as
$$\begin{aligned} \Phi (L)=\frac{{\mu }^{+}_L-{\nu }^{-}_L}{2}, \Phi (L)\in [0,1]. \end{aligned}$$
(2)
The \(\Theta \) and \(\Phi \) act as a ranking relation between BFNs L and M as follows:
Definition 3.4
[38] Let L and M be any two BFNs.
(i)
If \(\Theta (L) <\Theta (M)\), follows \(L\prec M\)
 
(ii)
If \(\Theta (L)> \Theta (M)\), follows \(L\succ M\)
 
(iii)
If \(\Theta (L)= \Theta (M)\), then
(1)
If \(\Phi (L) < \Phi (M)\), follows \(L\prec M\).
 
(2)
If \(\Phi (L) > \Phi (M)\), follows \(L\succ M\).
 
(3)
If \(\Phi (L)=\Phi (M)\), then \(L\sim M\).
 
 
Based on Definition 3.2, the following rules hold:
Definition 3.5
[38] Let \(L=(\langle {\mu }^{+}_L, {\nu }^{-}_L\rangle )\) and \(M=(\langle {\mu }^{+}_M, {\nu }^{-}_M\rangle )\) be two BFVs over X and \(\pi ,\pi _1,\pi _2>0\), then
(1)
\(L\oplus M= M\oplus L\)
 
(2)
\(L\otimes M= M\otimes L\)
 
(3)
\(\pi (L\oplus M)=\pi L\oplus \pi M\)
 
(4)
\((L\otimes M)^{\pi }=L^{\pi }\otimes M^{\pi }\)
 
(5)
\(\pi _1 L\oplus \pi _2 L=(\pi _1+\pi _2) L\)
 
(6)
\(L^{\pi _1}\otimes L^{\pi _2}=L^{(\pi _1+\pi _2)}\)
 
(7)
\((L^{\pi _1})^{\pi _2}=L^{\pi _1\pi _2}\).
 

Some bipolar fuzzy aggregation operators

Definition 3.6
[14] Let \(L_h=(\mu ^{+}_h,\nu ^{-}_h)\) \((h=1,2,\ldots m)\) be a set of BFNs. A BFWA operator \(\Omega ^{m}\rightarrow \Omega \) of weight vector \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)^\mathrm{{T}}\), \(\psi > 0\) and \(\sum ^{m}\nolimits _{h=1} \psi _h=1\), follows \(BFWA_{\psi }(L_1, L_2,\ldots , L_m)=\bigoplus ^{m}\nolimits _{h=1}(\psi _h L_h)\):
$$\begin{aligned} =\left( 1-\prod ^{m}\limits _{h=1}{(1-\mu ^{+}_h)}^{\psi _h},-\prod ^{n}\limits _{h=1}{\Bigg (|\nu ^{-}_h}|\Bigg )^{\psi _h}\right) . \end{aligned}$$
(3)
Definition 3.7
[14] Let \(L_h=(\mu ^{+}_h,\nu ^{-}_h)\) \((h=1,2,\ldots , m)\) be a set of BFNs. A BFWG operator \(\Omega ^{m}\rightarrow \Omega \) of weight vector \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)^\mathrm{{T}}\), for which \(\psi _h> 0\) and \(\sum ^{m}\nolimits _{h=1} \psi _h=1\), follows as \(BFWG_{\psi }(L_1, L_2,\ldots , L_m)=\bigotimes ^{m}\nolimits _{h=1}(L_h)^{\psi _h}\):
$$\begin{aligned} =\left( \prod ^{m}\limits _{h=1}{\mu ^{+}_h}^{\psi _h}, -1+\prod ^{m}\limits _{h=1}{(1+\nu ^{-}_h)}^{\psi _h}\right) . \end{aligned}$$
(4)
Definition 3.8
Let \(L_1=(\mu ^{+}_1,\nu ^{-}_1)\) and \(L_2=(\mu ^{+}_2,\nu ^{-}_2)\) be any two BFNs, and then bipolar fuzzy normalized Hamming distance (BFNHD) is defined as
$$\begin{aligned} d_\mathrm{{BFNHD}}(L_1,L_2)=\frac{1}{2}(|\mu ^{+}_1-\mu ^{+}_2|+|\nu ^{-}_1-\nu ^{-}_2|). \end{aligned}$$
(5)
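The BFWA, BFWG, and BFNHD formulas above translate directly into code. The sketch below is an illustrative Python rendering of Definitions 3.6–3.8, assuming BFNs are given as \((\mu ^{+},\nu ^{-})\) tuples and the weights sum to one; the function names are ours.

```python
def bfwa(bfns, weights):
    """BFWA operator of Definition 3.6 (Eq. (3))."""
    mu_prod, nu_prod = 1.0, 1.0
    for (mu, nu), w in zip(bfns, weights):
        mu_prod *= (1 - mu) ** w
        nu_prod *= abs(nu) ** w
    return (1 - mu_prod, -nu_prod)

def bfwg(bfns, weights):
    """BFWG operator of Definition 3.7 (Eq. (4))."""
    mu_prod, nu_prod = 1.0, 1.0
    for (mu, nu), w in zip(bfns, weights):
        mu_prod *= mu ** w
        nu_prod *= (1 + nu) ** w
    return (mu_prod, -1 + nu_prod)

def bfnhd(L1, L2):
    """Bipolar fuzzy normalized Hamming distance of Definition 3.8 (Eq. (5))."""
    return 0.5 * (abs(L1[0] - L2[0]) + abs(L1[1] - L2[1]))

bfns = [(0.6, -0.3), (0.7, -0.5), (0.4, -0.6)]
weights = [0.5, 0.2, 0.3]
print(bfwa(bfns, weights), bfwg(bfns, weights), bfnhd(bfns[0], bfns[1]))
```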

Logarithmic BFNs

In this section, we establish some new logarithmic operational laws on BFNs.

Logarithmic operation

Let L be a BFN and \(\theta > 0\) a real number. Since \(\log _\theta {0}\) and \(\log _1{x}\) are not defined in the real numbers, we assume throughout this study that \(L\ne 0\), where 0 is the zero BFN \(\langle 0,-1\rangle \), and that \(\theta \ne 1\).
Definition 4.1
Let X be a non-empty set and \(L=\{{\mu }^{+}_L,{\nu }^{-}_L\}\) be a BFN; then, we define the logarithm operation on the BFN L as follows:
$$\begin{aligned} \log _\theta {L}= & {} \Bigg \{\Bigg \langle x, 1-\log _\theta {{{\mu }^{+}_L(x)}},\nonumber \\{} & {} -|\log _\theta {{(1+{\nu }^{-}_L(x))}}|\Bigg \rangle \mid x\in X\Bigg \}, \end{aligned}$$
(6)
where \(0<\theta \le \min \Bigg \{{\mu }^{+}_L, 1+{\nu }^{-}_L\Bigg \}\le 1\), \(\theta \ne 1\). It is clear that \(\log _\theta {L}\) is also a BFN. From the definition of BFNs, for all \(x\in X\), the functions \({\mu }^{+}_L, {\nu }^{-}_L\) satisfy \({\mu }^{+}_L:X\rightarrow [0,1]\), \({\nu }^{-}_L:X\rightarrow [-1,0]\) and \(-1\le {\mu }^{+}_L+{\nu }^{-}_L\le 1\).
The PMD \(1-\log _\theta {{{\mu }^{+}_L}}:X\rightarrow [0,1]\), for all \(x\in X\rightarrow 1-\log _\theta {{{\mu }^{+}_L(x)}}\in [0,1]\), and NNMD \(-\log _\theta {{\Bigg (1+{\nu }^{-}_L\Bigg )}}: X\rightarrow [-1,0]\), for all \(x\in X\rightarrow -\log _\theta {{\Bigg (1+{\nu }^{-}_L(x)\Bigg )}}\in [-1,0]\). Therefore
$$\begin{aligned} \log _\theta {L}= & {} \Bigg \{\Bigg \langle x, 1-\log _\theta {{{\mu }^{+}_L(x)}},\\ {}{} & {} -\log _\theta {{\Bigg (1+{\nu }^{-}_L(x)\Bigg )}}\Bigg \rangle \mid x \in X\Bigg \}, \end{aligned}$$
where \(0<\theta \le \min \Bigg \{{\mu }^{+}_L, 1+{\nu }^{-}_L\Bigg \}\le 1\), \(\theta \ne 1\), is a BFNs.
Definition 4.2
Let \(L=\{\langle x,{\mu }^{+}_L(x),{\nu }^{-}_L(x)\rangle |x\in X\}\) be a BFNs
$$\begin{aligned} \log _\theta {L}=\left\{ \begin{array}{ll} \bigg \langle 1-\log _\theta {{{\mu }^{+}_L}},-\log _\theta {{\big (1+{\nu }^{-}_L\big )}}\bigg \rangle ;\\ \Bigg \langle 1-\log _{\frac{1}{\theta }}{{{\mu }^{+}_L}},-\log _{\frac{1}{\theta }}{{(1+{\nu }^{-}_L)}}\Bigg \rangle ,\\ \end{array}\right. \end{aligned}$$
where \(0<\theta \le \min \Bigg \{{\mu }^{+}_L, 1+{\nu }^{-}_L\Bigg \}\le 1\) and
\(0<{\frac{1}{\theta }}\le \min \Bigg \{{\mu }^{+}_L, 1+{\nu }^{-}_L\Bigg \}\le 1, \theta \ne 1\) where the function \(\log _\theta {L}\) is called a logarithm operator, and the value of \(\log _\theta {L}\) is called logarithmic BFNs (L-BFN), and taking \(\log _\theta {0}=0\), \(\theta > 0\), and \(\theta \ne 1\).
Theorem 4.1
For a BFNs B, the value of \(\log _\theta {B}\) is a BFN.
Proof
Let \(B=\{\langle {\mu }^{+}_B,{\nu }^{-}_B\rangle \}\) be a BFNs, where \(0\le {\mu }^{+}_B\le 1\) and \(-1\le {\nu }^{-}_B\le 0\) and \(-1\le {\mu }^{+}_B+{\nu }^{-}_B\le 1\). Then, two cases appear as follows:
Case 1:
When \(0<\theta \le \min \{{\mu }^{+}_B,1+{\nu }^{-}_B\}< 1, \theta \ne 1\). \(0\le \log _\theta {{\mu }^{+}_B},\log _\theta {(1+{\nu }^{-}_B)}\le 1\). Also, \(0\le 1-\log _\theta {{\mu }^{+}_B}\le 1\) and \(-1\le -\log _\theta {(1+{\nu }^{-}_B)}\le 0\). Then, \(-1\le 1-\log _\theta {{\mu }^{+}_B}-\log _\theta {(1+{\nu }^{-}_B)}\le 1\). Therefore, \(\log _\theta {B}\) is a BFN.
 
Case 2:
When \( \theta > 1\) and \(0<\frac{1}{\theta }<1\) and \(\frac{1}{\theta }\le \min \{\{{\mu }^{+}_B,1+{\nu }^{-}_B\}\), it can be easily proved that \(\log _\theta {B}\) is a BFN. \(\square \)
 
Example 4.1
Let \(L=\langle 0.7,-0.4\rangle \) be a BFN and \(\theta =0.3\); then \(\log _\theta {L}=\langle 1-\log _\theta {{\mu }^{+}_L}, -|\log _\theta {(1+{\nu }^{-}_L)}|\rangle \) \(=\langle 1-\log _{0.3}{(0.7)}, -|\log _{0.3}{(1-0.4)}|\rangle =\langle 0.7038,-0.4243\rangle \). If \(\theta =3\), then \(\log _{\frac{1}{\theta }}{L}=\log _{\frac{1}{3}}\langle 0.7,-0.4\rangle \) \(=\langle 1-\log _{\frac{1}{3}}{(0.7)}, -|\log _{\frac{1}{3}}{(1-0.4)}|\rangle \) \(=\langle 0.6753,-0.4650\rangle \).
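The logarithmic operation is easy to check numerically. The following Python sketch (an illustration, not the authors' code) implements Definition 4.2 and reproduces Example 4.1; the name log_bfn is ours.

```python
import math

def log_bfn(L, theta):
    """Logarithmic operation on a BFN L = (mu, nu), following Definition 4.2."""
    mu, nu = L
    if theta <= 0 or theta == 1:
        raise ValueError("theta must be positive and different from 1")
    base = theta if theta < 1 else 1.0 / theta   # use base 1/theta when theta > 1
    if not 0 < base <= min(mu, 1 + nu):
        raise ValueError("need 0 < base <= min(mu, 1 + nu)")
    return (1 - math.log(mu, base), -abs(math.log(1 + nu, base)))

print(log_bfn((0.7, -0.4), 0.3))  # approx. (0.7038, -0.4243), as in Example 4.1
print(log_bfn((0.7, -0.4), 3))    # base 1/3 is used: approx. (0.6753, -0.4650)
```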
Now, we discuss some basic properties of L-BFN \(\log _\theta {B}\) based on (BOL) when \(\theta \in (0,1)\) and \(\theta > 1\) as follows:
Theorem 4.2
Let \(L=\{\langle {\mu }^{+}_L,{\nu }^{-}_L\rangle \}\) be a BFNs, where \(0<\theta \le \min \Bigg \{{\mu }^{+}_L, 1+{\nu }^{-}_L\Bigg \}\le 1\), \(\theta \ne 1\), then
(1)
\(\theta ^{\log _\theta {L}}=L\)
 
(2)
\(\log _\theta {\theta ^{L}}=L\).
 
Proof
The proof of Theorem 4.2 is trivial.
(1)
According to Definition 3.2 and the operational laws of BFNs, we have \(\theta ^{\log _\theta {L}}=\Bigg \langle \theta ^{1-(1-\log _\theta {{\mu }^{+}_L})}, -1+\theta ^{|\log _\theta {(1+{\nu }^{-}_L)}|}\Bigg \rangle \) \(=\langle \theta ^{\log _\theta {{\mu }^{+}_L}}, -1+(1+\nu ^{-}_L) \rangle \) \(=\langle \mu ^{+}_L,\nu ^{-}_L\rangle =L\).
 
(2)
By Definition 4.2, we have \(\log _\theta {\theta ^{L}}=\log _\theta \Bigg \langle \theta ^{1-\mu ^{+}_L}, -1+\theta ^{|\nu ^{-}_L|}\Bigg \rangle \) \(=\Bigg \langle 1-\log _\theta {\theta ^{1-\mu ^{+}_L}}, -\Big |\log _\theta {\theta ^{|\nu ^{-}_L|}}\Big |\Bigg \rangle \) \(=\langle \mu ^{+}_L,\nu ^{-}_L\rangle =L\). \(\square \)
 
Theorem 4.3
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2)\) be two BFNs, where \(0<\theta \le \min \limits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\). Then
(1)
\(\log _\theta {L_1}\bigoplus \log _\theta {L_2} =\log _\theta {L_2}\bigoplus \log _\theta {L_1}\)
 
(2)
\(\log _\theta {L_1}\bigotimes \log _\theta {L_2} =\log _\theta {L_2}\bigotimes \log _\theta {L_1}\).
 
Proof
The proof of Theorem 4.3 is trivial. \(\square \)
Theorem 4.4
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \} (h=1,2,3)\) be three BFNs, where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\). Then
(1)
\((\log _\theta {L_1}\bigoplus \log _\theta {L_2})\bigoplus \log _\theta {L_3}= \log _\theta {L_1}\bigoplus (\log _\theta {L_2}\bigoplus \log _\theta {L_3})\)
 
(2)
\((\log _\theta {L_1}\bigotimes \log _\theta {L_2})\bigotimes \log _\theta {L_3} =\log _\theta {L_1}\bigotimes (\log _\theta {L_2}\bigotimes \log _\theta {L_3})\).
 
Proof
The proof of the Theorem 4.4 is trivial. \(\square \)
Theorem 4.5
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \} (h=1,2)\) be two BFNs, where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\), and let \(\lambda _1,\lambda _2>0\) be two real numbers. Then
(1)
\(\lambda _1(\log _\theta {L_1}\bigoplus \log _\theta {L_2})=\lambda _1\log _\theta {L_1}\bigoplus \lambda _1\log _\theta {L_2}\)
 
(2)
\((\log _\theta {L_1}\bigotimes \log _\theta {L_2})^{\lambda _1}=(\log _\theta {L_1})^{\lambda _1}\bigotimes (\log _\theta {L_2})^{\lambda _1}\)
 
(3)
\(\lambda _1\log _\theta {L_1}\bigoplus \lambda _2\log _\theta {L_1}=(\lambda _1+\lambda _2)\log _\theta {L_1}\)
 
(4)
\((\log _\theta {L_1})^{\lambda _1}\bigotimes (\log _\theta {L_1})^{\lambda _2} =(\log _\theta {L_1})^{(\lambda _1+\lambda _2)}\)
 
(5)
\(\Bigg ((\log _\theta {L_1})^{\lambda _1}\Bigg )^{\lambda _2} =(\log _\theta {L_1})^{\lambda _1\lambda _2}\).
 
Proof
Here, we prove only (1) and (2) of Theorem 4.5; the proofs of the other parts are similar.
Let \(L_1,L_2\) be two BFNs, and by Definition 4.2, we get \(\log _\theta {L_1}=\langle 1-\log _\theta {{\mu }^{+}_1}, -\log _\theta {(1+{\nu }^{-}_1)}\rangle \) and \(\log _\theta {L_2}=\langle 1-\log _\theta {{\mu }^{+}_2}, -\log _\theta {(1+{\nu }^{-}_2)}\rangle \).
Then, by the operational laws between two BFNs, we have \(\log _\theta {L_1}\bigoplus \log _\theta {L_2}=\langle 1-(\log _\theta {{\mu }^{+}_1})(\log _\theta {{\mu }^{+}_2}), -|(\log _\theta {(1+{\nu }^{-}_1)})||(\log _\theta {(1+{\nu }^{-}_2)})|\rangle \) and \(\log _\theta {L_1}\bigotimes \log _\theta {L_2}=\langle (1-\log _\theta {{\mu }^{+}_1})(1-\log _\theta {{\mu }^{+}_2}), 1-(1-(\log _\theta {(1+{\nu }^{-}_1)}))(1-(\log _\theta {(1+{\nu }^{-}_2)}))\rangle \).
(1) For any real number \(\lambda >0\), we have \(\lambda (\log _\theta {L_1}\bigoplus \log _\theta {L_2})=\langle 1-(\log _\theta {{\mu }^{+}_1}\log _\theta {{\mu }^{+}_2})^{\lambda }, -(|\log _\theta {(1+{\nu }^{-}_1)}||\log _\theta {(1+{\nu }^{-}_2)}|)^{\lambda }\rangle \) \(=\langle 1-(\log _\theta {{\mu }^{+}_1})^{\lambda }(\log _\theta {{\mu }^{+}_2})^{\lambda }, -(|\log _\theta {(1+{\nu }^{-}_1)})^{\lambda }|)(|\log _\theta {(1+{\nu }^{-}_2)})^{\lambda }|)\rangle \) \(=\langle 1-((\log _\theta {{\mu }^{+}_1})^{\lambda }, -(|\log _\theta {(1+{\nu }^{-}_1)})^{\lambda }|)\rangle \bigoplus \langle 1-((\log _\theta {{\mu }^{+}_2})^{\lambda }, -(|\log _\theta {(1+{\nu }^{-}_2)})^{\lambda }|)\rangle \) \(=\lambda \log _\theta {L_1}\bigoplus \lambda \log _\theta {L_2}\).
(2) For any real number, \(\lambda >0\), we have
$$\begin{aligned}{} & {} (\log _\theta {L_1}\bigotimes \log _\theta {L_2})^{\lambda _1}\\{} & {} \quad =\langle ((1-\log _\theta {{\mu }^{+}_1})(1-\log _\theta {{\mu }^{+}_2}))^{\lambda },\\{} & {} \quad 1-((1-\log _\theta {(1+{\nu }^{-}_1)})(1-\log _\theta {(1+{\nu }^{-}_2)}))^{\lambda }\rangle \\{} & {} \quad =\langle (1-\log _\theta {{\mu }^{+}_1})^{\lambda }(1-\log _\theta {{\mu }^{+}_2})^{\lambda },\\{} & {} \quad 1-(1-\log _\theta {(1+{\nu }^{-}_1)})^{\lambda }(1-\log _\theta {(1+{\nu }^{-}_2)})^{\lambda }\rangle \\{} & {} \quad =\langle (1-\log _\theta {{\mu }^{+}_1})^{\lambda }, 1-(1-\log _\theta {(1+{\nu }^{-}_1)})^{\lambda }\rangle \\{} & {} \qquad \bigotimes \langle (1-\log _\theta {{\mu }^{+}_2})^{\lambda }, 1-(1-\log _\theta {(1+{\nu }^{-}_2)})^{\lambda }\\{} & {} \quad =(\log _\theta {L_1})^{\lambda }\bigotimes (\log _\theta {L_2})^{\lambda }. \end{aligned}$$
\(\square \)
Here, we considered some weighted LBF operators.
Definition 4.3
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a set of BFNs, where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\), and let \(\mathrm{{L-BFWA}}:\Omega ^{m}\rightarrow \Omega \) be a mapping. If
$$\begin{aligned} \mathrm{{L-BFWA}}(L_1,L_2,\ldots ,L_m)=\bigoplus ^{m}\limits _{h=1}\psi _h\log _{\theta _h}{L_h}, \end{aligned}$$
(7)
where \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)^\mathrm{{T}}\) is the weight vector of \(\log _{\theta _h}{L_h}\), with \(\psi _h>0\) and \(\sum ^{m}\nolimits _{h=1}\psi _h=1\), then the mapping is called the L-BFWA operator.
Theorem 4.6
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \} (h=1,2,\ldots ,m)\) be a set of BFNs, where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\). Then, the aggregated value of the BFNs \(L_h\) based on the L-BFWA operator is also a BFN. Further, if
$$\begin{aligned}{} & {} \mathrm{{L-BFWA}}(L_1,L_2,\ldots ,L_m)\nonumber \\{} & {} \quad =\left\{ \begin{array}{ll} \bigg \langle 1-\prod ^{m}\limits _{h=1}\Bigg (\log _{\theta _h}{{{\mu }^{+}_h}}\Bigg )^{\psi _h},-\prod ^{m}\limits _{h=1} \Bigg (|\log _{\theta _h}{{\big (1+{\nu }^{-}_h\big )}}|\Bigg )^{\psi _h}\bigg \rangle ,\\ \Bigg \langle 1-\prod ^{m}\limits _{h=1}\Bigg (\log _{\frac{1}{\theta _h}}{{{\mu }^{+}_h}}\Bigg )^{\psi _h},-\prod ^{m}\limits _{h=1} \Bigg (|\log _{\frac{1}{\theta _h}}{{(1+{\nu }^{-}_h)}}|\Bigg )^{\psi _h}\Bigg \rangle ,\\ \end{array}\right. \end{aligned}$$
(8)
where \(0<\theta _h\le \min \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1, \theta _h\ne 1\) and \(0<{\frac{1}{\theta _h}}\le \min \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1, \theta _h\ne 1\).
Proof
The proof of Eq. (8) can be done by the method of mathematical induction on m for \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\). Since for each h, \(L_h\) is a BFN, we have \(\mu ^{+}_h\in [0,1]\) and \(\nu ^{-}_h\in [-1,0]\). Then, by Definition 4.2, for \(m=2\), we get
$$\begin{aligned} \mathrm{{L-BFWA}}(L_1,L_2)= & {} \psi _1\Bigg (\log _{\theta _1}{L_1}\Bigg )\bigoplus \psi _2\Bigg (\log _{\theta _2}{L_2}\Bigg ) \\= & {} \langle 1-(\log _{\theta _1}{\mu ^{+}_1})^{\psi _1},\\{} & {} -(|\log _{\theta _1}{(1+{\nu }^{-}_1)|})^{\psi _1}\rangle \\{} & {} \bigoplus \langle 1-(\log _{\theta _2}{\mu ^{+}_2})^{\psi _2},\\{} & {} -(|\log _{\theta _2}{(1+{\nu }^{-}_2)|})^{\psi _2}\rangle \\= & {} \Bigg \langle 1-\prod ^{2}\limits _{h=1}(\log _{\theta _h}{\mu ^{+}_h})^{\psi _h},\\{} & {} -\prod ^{2}\limits _{h=1}(|\log _{\theta _h}{(1+{\nu }^{-}_h)|})^{\psi _h}\Bigg \rangle . \end{aligned}$$
Thus, Eq. (8) holds for \(m=2\). Let us assume that Eq. (8) holds for \(h=k\). Then
$$\begin{aligned}{} & {} \mathrm{{L-BFWA}}(L_1,L_2,\ldots ,L_k) =\Bigg \langle 1-\prod ^{k}\limits _{h=1}(\log _{\theta _h}{\mu ^{+}_h})^{\psi _h},\nonumber \\{} & {} \quad -\prod ^{k}\limits _{h=1}(|\log _{\theta _h}{(1+{\nu }^{-}_h)|})^{\psi _h}\Bigg \rangle . \end{aligned}$$
(9)
Now, for \(m=k+1\), we have \(\mathrm{{L-BFWA}}(L_1,L_2,\ldots ,L_{k+1}) =\mathrm{{L-BFWA}}(L_1,L_2,\ldots ,L_k)\bigoplus \psi _{k+1}\Bigg (\log _{\theta _{k+1}}{L_{k+1}}\Bigg ) =\Bigg \langle 1-\prod ^{k}\nolimits _{h=1}(\log _{\theta _h}{\mu ^{+}_h})^{\psi _h}, -\prod ^{k}\nolimits _{h=1}(|\log _{\theta _h}(1+{\nu }^{-}_h)|)^{\psi _h}\Bigg \rangle \bigoplus \Bigg \langle 1-(\log _{\theta _{k+1}}{\mu ^{+}_{k+1}})^{\psi _{k+1}}, -(|\log _{\theta _{k+1}}{(1+{\nu }^{-}_{k+1})|})^{\psi _{k+1}}\Bigg \rangle =\Bigg \langle 1-\prod ^{k+1}\nolimits _{h=1}(\log _{\theta _h}{\mu ^{+}_h})^{\psi _h}, -\prod ^{k+1}\nolimits _{h=1}(|\log _{\theta _h}{(1+{\nu }^{-}_h)|})^{\psi _h}\Bigg \rangle \). Thus the Eq. (8) holds for \(n=k+1\).
Hence, Eq. (8) is true for all natural numbers by the principle of mathematical induction. \(\square \)
Example 4.2
Let \(L_1=(0.6, -0.3)\), \(L_2=(0.7,-0.5)\) and \(L_3=(0.4,-0.6)\) be three BFNs and \(\psi =(0.5,0.2,0.3)\) be the weight vector of them. Let \(\theta _1=\theta _2=\theta _3=0.1\). Then
$$\begin{aligned} \mathrm{{L-BFWA}}(L_1,L_2,L_3)= & {} \Bigg \langle 1-\prod ^{3}\limits _{j=1}(\log _{\theta _j}{\mu ^{+}_j})^{\psi _j},\\{} & {} -\prod ^{3}\limits _{h=1}(|\log _{\theta _h}{(1+{\nu }^{-}_h)|})^{\psi _h}\Bigg \rangle \\= & {} \Bigg \langle 1-(\log _{0.1}{0.6})^{0.5}\\{} & {} \times (\log _{0.1}{0.7})^{0.2}\times (\log _{0.1}{0.4})^{0.3}, \\{} & {} -(|\log _{0.1}{(1-0.3)|})^{0.5}\\{} & {} \times (|\log _{0.1}{(1-0.5)|})^{0.2}\\{} & {} \times (|\log _{0.1}{(1-0.6)|})^{0.3}\Bigg \rangle \\= & {} \langle 0.7539,-0.2348 \rangle . \end{aligned}$$
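As a numerical cross-check, the sketch below implements the first branch of Eq. (8) in Python for a common base \(\theta _h\in (0,1)\) and reproduces Example 4.2 up to rounding; the name l_bfwa is ours.

```python
import math

def l_bfwa(bfns, weights, thetas):
    """L-BFWA operator, first branch of Eq. (8); assumes 0 < theta_h <= min(mu_h, 1 + nu_h) < 1."""
    mu_prod, nu_prod = 1.0, 1.0
    for (mu, nu), w, th in zip(bfns, weights, thetas):
        mu_prod *= math.log(mu, th) ** w
        nu_prod *= abs(math.log(1 + nu, th)) ** w
    return (1 - mu_prod, -nu_prod)

bfns = [(0.6, -0.3), (0.7, -0.5), (0.4, -0.6)]
print(l_bfwa(bfns, [0.5, 0.2, 0.3], [0.1, 0.1, 0.1]))  # approx. (0.754, -0.235), cf. Example 4.2
```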
Now, we prove some properties of L-BFWA operator for \(\theta _1=\theta _2=\theta _3=\theta \) and \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\) and \(\psi _h>0\) be the weight vector of BFNs \(L_h\), such that \(\sum ^{m}\nolimits _{h=1}\psi _h=1\).
Theorem 4.7
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a group of BFNs such that \(L_h=L\) for all h; then \(\mathrm{{L-BFWA}}(L_1,L_2,\ldots ,L_m)=\log _{\theta }{L}\).
Proof
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a BFNs such that \(L_h=L\) for all h. Then, by Theorem 4.6, we have
$$\begin{aligned}{} & {} \mathrm{{L-BFWA}}(L_1,L_2,\ldots ,L_m)\\{} & {} \quad =\Bigg \langle 1-\prod ^{m}\limits _{h=1}(\log _{\theta }{\mu ^{+}_h})^{\psi _h}, -\prod ^{m}\limits _{h=1}(|\log _{\theta }{(1+{\nu }^{-}_h)|})^{\psi _h}\Bigg \rangle .\\{} & {} \quad =\Bigg \langle 1-\prod ^{m}\limits _{h=1}(\log _{\theta }{\mu ^{+}})^{\psi _h}, -\prod ^{m}\limits _{h=1}(|\log _{\theta }{(1+{\nu }^{-})|})^{\psi _h}\Bigg \rangle \\{} & {} \quad =\Bigg \langle 1-(\log _{\theta }{\mu ^{+}})^{\sum ^{m}\limits _{h=1}\psi _h}, -(|\log _{\theta }{(1+{\nu }^{-})|})^{\sum ^{m}\limits _{h=1}\psi _h}\Bigg \rangle \\{} & {} \quad =\Bigg \langle 1-(\log _{\theta }{\mu ^{+}}), -(\log _{\theta }{(1+{\nu }^{-})})\Bigg \rangle =\log _{\theta }{L}. \end{aligned}$$
\(\square \)
Theorem 4.8
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \} (h=1,2,\ldots ,m)\) be a group of BFNs with \(L^{-}=\langle \min \nolimits _{h}\{\mu ^{+}_h\},\max \nolimits _{h}\{\nu ^{-}_h\}\rangle \) and \(L^{+}=\langle \max \nolimits _{h}\{\mu ^{+}_h\},\min \nolimits _{h}\{\nu ^{-}_h\}\rangle \), then
$$\begin{aligned} \log _{\theta }{L^{-}}\le \mathrm{{L-BFWA}}(L_1,L_2\ldots ,L_m)\le \log _{\theta }{L^{+}}. \end{aligned}$$
Proof
Since for any h, \(\min \nolimits _{h}\{\mu ^{+}_h\}\le \mu ^{+}_h\le \max \nolimits _{h}\{\mu ^{+}_h\}\) and \(\min \nolimits _{h}\{\nu ^{-}_h\}\le \nu ^{-}_h\le \max \nolimits _{h}\{\nu ^{-}_h\}\), it follows that \(L^{-}\le L_h\le L^{+}\). Then, assume that \(\mathrm{{L-BFWA}}(L_1,L_2\ldots ,L_m)=\log _{\theta }{L}=\langle \mu ^{+}_L,\nu ^{-}_L\rangle \), \(\log _{\theta }{L^{-}}=\langle {\mu ^{+}}_{L^{-}},{\nu ^{-}}_{L^{-}}\rangle \) and \(\log _{\theta }{L^{+}}=\langle {\mu ^{+}}_{L^{+}},{\nu ^{-}}_{L^{+}}\rangle \). Then, based on the monotonicity of the logarithmic function, we have
$$\begin{aligned} \mu ^{+}_L= & {} 1-\prod ^{m}\limits _{h=1}\Bigg (\log _{\theta }{{\mu ^{+}}_h}\Bigg )^{\psi _h}\ge \prod ^{m}\limits _{h=1}\Bigg (\log _{\theta }{\min \limits _{h}\{\mu ^{+}_h\}}\Bigg )^{\psi _h}\\= & {} 1-\log _{\theta }{(\min \{\nu ^{+}_h\})}=\mu ^{+}_{L^{-}}.\\ \nu ^{+}_L= & {} -\prod ^{m}\limits _{h=1}\Bigg (|(\log _{\theta }{{(1+\nu ^{-}_h)}}\Bigg )^{\psi _h}\\\le & {} -\prod ^{m}\limits _{h=1}\Bigg (|\log _{\theta }{(\min \limits _{h}\{(1+\nu ^{-}_h)\}}|\Bigg )^{\psi _h}\\= & {} -\bigg |\log _{\theta }{\bigg (\min \Bigg \{(1+\nu ^{-}_h\bigg )\Bigg \})}\bigg |=\nu ^{-}_{L^{+}}. \end{aligned}$$
Also, we have
$$\begin{aligned} \mu ^{+}_L= & {} 1-\prod ^{m}\limits _{j=1}\Bigg (\log _{\theta }{{\mu ^{+}}_h}\Bigg )^{\psi _h}\\\le & {} \prod ^{m}\limits _{h=1}\Bigg (\log _{\theta }{\max \limits _{h}\{\mu ^{+}_h\}}\Bigg )^{\psi _h}\\= & {} 1-\log _{\theta }{(\min \{\nu ^{+}_h\})}=\mu ^{+}_{L^{+}}.\\ \nu ^{+}_L= & {} -\prod ^{m}\limits _{h=1}\Bigg (|(\log _{\theta }{{(1+\nu ^{-}_h)}}\Bigg )^{\psi _h} \\{} & {} -\prod ^{m}\limits _{h=1}\Bigg (|\log _{\theta }{(\max \limits _{h}\{(1+\nu ^{-}_h)\}}|\Bigg )^{\psi _h}\\= & {} -\bigg |\log _{\theta }{\bigg (\min \Bigg \{(1+\nu ^{-}_h)\Bigg \}\bigg )}\bigg |=\nu ^{-}_{L^{-}}. \end{aligned}$$
Based on the score function, we get \(\Delta \Bigg (\log _\theta {L}\Bigg )=\frac{1+\mu ^{+}_L+\nu ^{-}_L}{2}\le \frac{1+\mu ^{+}_{L^{+}}+\nu ^{-}_{L^{-}}}{2}=\Delta \Bigg (\log _\theta {L^{+}}\Bigg )\) and \(\Delta \Bigg (\log _\theta {L}\Bigg )=\frac{1+\mu ^{+}_L+\nu ^{-}_L}{2}\ge \frac{1+\mu ^{+}_{L^{-}}+\nu ^{-}_{L^{+}}}{2}=\Delta \Bigg (\log _\theta {L^{-}}\Bigg )\). Hence, \(\Delta \Bigg (\log _\theta {L^{-}}\Bigg )\le \Delta \Bigg (\log _\theta {L}\Bigg )\le \Delta \Bigg (\log _\theta {L^{+}}\Bigg )\). Now, we discuss the following cases:
Case 1:
If \(\Delta \Bigg (\log _\theta {L^{-}}\Bigg )< \Delta \Bigg (\log _\theta {L}\Bigg )< \Delta \Bigg (\log _\theta {L^{+}}\Bigg )\). The results holds.
 
Case 2:
If \(\Delta \Bigg (\log _\theta {L^{+}}\Bigg )=\Delta \Bigg (\log _\theta {L}\Bigg )\), then \(\mu ^{+}_L+\nu ^{-}_L=\mu ^{+}_{L^{+}}+\nu ^{-}_{L^{-}}\) which implies that \(\mu ^{+}_L=\mu ^{+}_{L^{+}}, \nu ^{-}_L=\nu ^{-}_{L^{-}}\), and hence, \(\nabla \Bigg (\log _\theta {L^{+}}\Bigg )=\nabla \Bigg (\log _\theta {L}\Bigg )\).
 
Case 3:
If \(\Delta \Bigg (\log _\theta {L^{-}}\Bigg )=\Delta \Bigg (\log _\theta {L}\Bigg )\), then \(\mu ^{+}_L+\nu ^{-}_L=\mu ^{+}_{L^{-}}+\nu ^{-}_{L^{+}}\) which implies that \(\mu ^{+}_L=\mu ^{+}_{L^{-}}, \nu ^{-}_L=\nu ^{-}_{L^{+}}\), and hence, \(\nabla \Bigg (\log _\theta {L^{-}}\Bigg )=\nabla \Bigg (\log _\theta {L}\Bigg )\).
 
Therefore, combining all the cases, we get
$$\begin{aligned} \log _{\theta }{L^{-}}\le \mathrm{{L-BFWA}}(L_1,L_2\ldots ,L_m) \le \log _{\theta }{L^{+}}. \end{aligned}$$
\(\square \)
Theorem 4.9
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) and \(L'_h=\{\langle {\mu '}^{+}_h,{\nu '}^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be two groups of BFNs. If \(\mu ^{+}_h\le {\mu '}^{+}_h\) and \(\nu ^{-}_h\ge {\nu '}^{-}_h\), then
$$\begin{aligned} \mathrm{{L-BFWA}}(L_1,L_2\ldots ,L_m)\le \mathrm{{L-BFWA}}(L'_1,L'_2\ldots ,L'_m). \end{aligned}$$
Proof
The proof of the theorem is similar to that above. \(\square \)
Definition 4.4
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a set of BFNs, and let \(\mathrm{{L-BFOWA}}:\Omega ^{m}\rightarrow \Omega \) be the L-BFOWA operator of dimension m, whose weight vector \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)^\mathrm{{T}}\) of \(\log _{\theta _h}{L_h}\) satisfies \(\psi _h>0\) and \(\sum ^{m}\nolimits _{h=1}\psi _h=1\). Then
$$\begin{aligned} \mathrm{{L-BFOWA}}(L_1,L_2,\ldots ,L_m)=\bigoplus ^{m}\limits _{h=1} \psi _h\log _{\theta _{\sigma (h)}}{L_{\sigma (h)}}\nonumber \\ \end{aligned}$$
(10)
where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_{\sigma (h)}, 1+{\nu }^{-}_{\sigma (h)}\Bigg \}\le 1\), \(\theta \ne 1\) and \(\sigma \) is a permutation of \((h=1,2,\ldots ,m)\) such that \(L_{\sigma (h-1)}\le L_{\sigma (h)}\) for \(h=1,2,\ldots , m\).
Theorem 4.10
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a group of BFNs, where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\). Then, aggregated values of BFNs \(L_h\) based on L-BFOWA operators are also a BFN. Further, if
$$\begin{aligned}{} & {} \mathrm{{L-BFOWA}}(L_1,L_2,\ldots ,L_m)\nonumber \\{} & {} =\left\{ \begin{array}{ll} \bigg \langle 1-\prod ^{m}\limits _{h=1}\Bigg (\log _{\theta _{\sigma (h)}} {{{\mu }^{+}_{\sigma (h)}}}\Bigg )^{\psi _h}, -\prod ^{m}\limits _{h=1}\Bigg (|\log _{\theta _{\sigma (h)}} {{\big (1+{\nu }^{-}_{\sigma (h)}\big )}}|\Bigg )^{\psi _h}\bigg \rangle ;\\ \Bigg \langle 1-\prod ^{m}\limits _{h=1}\Bigg (\log _{\frac{1}{\theta _{\sigma (h)}}} {{{\mu }^{+}_{\sigma (h)}}}\Bigg )^{\psi _h}, -\prod ^{m}\limits _{h=1}\Bigg (|\log _{\frac{1}{\theta _{\sigma (h)}}} {{(1+{\nu }^{-}_{\sigma (h)})}}|\Bigg )^{\psi _h}\Bigg \rangle ;\\ \end{array}\right. \nonumber \\ \end{aligned}$$
(11)
where \(0<\theta _{\sigma (h)}\le \min \Bigg \{{\mu }^{+}_{\sigma (h)}, 1+{\nu }^{-}_{\sigma (h)}\Bigg \}\le 1, \theta _{\sigma (h)}\ne 1\) and \(0<{\frac{1}{\theta _{\sigma (h)}}}\le \min \Bigg \{{\mu }^{+}_{\sigma (h)}, 1+{\nu }^{-}_{\sigma (h)}\Bigg \}\le 1, \theta _h\ne 1\).
Proof
The proof of the Theorem 4.10 is same as Theorem 4.6. \(\square \)
Definition 4.5
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a set of BFNs, where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\), and let \(\mathrm{{L-BFWG}}:\Omega ^{m}\rightarrow \Omega \) be a mapping of dimension m. If
$$\begin{aligned} \mathrm{{L-BFWG}}(L_1,L_2,\ldots ,L_m)=\bigotimes ^{m}\limits _{h=1} \Bigg (\log _{\theta _h}{L_h}\Bigg )^{\psi _h}, \end{aligned}$$
(12)
where weight vector of \(\log _{\theta _j}{L_h}\) is \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)^\mathrm{{T}}\), such that \(\psi _h>0\), \(\sum ^{m}\nolimits _{h=1}\psi _h=1\).
Theorem 4.11
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a set of BFNs. Then, aggregated values of BFNs \(L_h\) based on L-BFWG operators are also a BFN. Further, if
$$\begin{aligned}{} & {} \mathrm{{L-BFWG}}(L_1,L_2,\ldots ,L_m)\nonumber \\{} & {} =\left\{ \begin{array}{ll} \bigg \langle \prod ^{m}\limits _{h=1}\Bigg (1-\log _{\theta _h}{{{\mu }^{+}_h}}\Bigg )^{\psi _h},-1+\prod ^{m}\limits _{h=1} \Bigg (1-|\log _{\theta _h}{{\big (1+{\nu }^{-}_h\big )}}|\Bigg )^{\psi _h}\bigg \rangle ,\\ \Bigg \langle \prod ^{m}\limits _{h=1}\Bigg (1-\log _{\frac{1}{\theta _h}}{{{\mu }^{+}_h}}\Bigg )^{\psi _h},-1+\prod ^{m}\limits _{h=1} \Bigg (1-|\log _{\frac{1}{\theta _h}}{{(1+{\nu }^{-}_h)}}|\Bigg )^{\psi _h}\Bigg \rangle , \\ \end{array}\right. \nonumber \\ \end{aligned}$$
(13)
where \(0<\theta _h\le \min \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1, \theta _h\ne 1\) and \(0<{\frac{1}{\theta _h}}\le \min \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1, \theta _h\ne 1\).
Example 4.3
Let \(L_1=(0.6,-0.3)\), \(L_2=(0.7,-0.5)\) and \(L_3=(0.4,-0.6)\) be three BFNs and \(\psi =(0.5,0.2,0.3)\) be the weight vector of them. Let \(\theta _1=\theta _2=\theta _3=0.1\). Then
$$\begin{aligned}{} & {} \mathrm{{L-BFWG}}(L_1,L_2,L_3)\\{} & {} \quad =\Bigg \langle \prod ^{3}\limits _{h=1}(1-\log _{\theta _h}{\mu ^{+}_h})^{\psi _h}, -1\\{} & {} \qquad +\prod ^{3}\limits _{j=1}(1-|\log _{\theta _h}{(1+{\nu }^{-}_h)|}) ^{\psi _h}\Bigg \rangle \\{} & {} \quad =\Bigg \langle (1-\log _{0.1}{0.6})^{0.5}\times (1-\log _{0.1}{0.7})^{0.2}\\{} & {} \qquad \times (1-\log _{0.1}{0.4})^{0.3}, -1+(1-|\log _{0.1}{(1-0.3)|})^{0.5}\\{} & {} \qquad \times (1-|\log _{0.1}{(1-0.5)|})^{0.2}\\{} & {} \qquad \times (1-|\log _{0.1}{(1-0.6)|})^{0.3}\Bigg \rangle \\{} & {} \quad =\langle 0.7324,-0.2651 \rangle . \end{aligned}$$
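The same check can be performed for the geometric operator: the Python sketch below implements the first branch of Eq. (13) with a common base \(\theta _h\in (0,1)\) and reproduces Example 4.3 up to rounding; the name l_bfwg is ours.

```python
import math

def l_bfwg(bfns, weights, thetas):
    """L-BFWG operator, first branch of Eq. (13); assumes 0 < theta_h <= min(mu_h, 1 + nu_h) < 1."""
    mu_prod, nu_prod = 1.0, 1.0
    for (mu, nu), w, th in zip(bfns, weights, thetas):
        mu_prod *= (1 - math.log(mu, th)) ** w
        nu_prod *= (1 - abs(math.log(1 + nu, th))) ** w
    return (mu_prod, -1 + nu_prod)

bfns = [(0.6, -0.3), (0.7, -0.5), (0.4, -0.6)]
print(l_bfwg(bfns, [0.5, 0.2, 0.3], [0.1, 0.1, 0.1]))  # approx. (0.732, -0.265), cf. Example 4.3
```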
Definition 4.6
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a set of BFNs, and let \(\mathrm{{L-BFOWG}}:\Omega ^{m}\rightarrow \Omega \) be a function of dimension m, the L-BFOWG operator, whose weight vector \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)^\mathrm{{T}}\) of \(\log _{\theta _h}{L_h}\) satisfies \(\psi _h>0\) and \(\sum ^{m}\nolimits _{h=1}\psi _h=1\). Then
$$\begin{aligned} \mathrm{{L-BFOWG}}(L_1,L_2,\ldots ,L_m)=\bigotimes ^{m}\limits _{j=1} \Bigg (\log _{\theta _{\sigma (h)}}{L_{\sigma (h)}}\Bigg )^{\psi _h},\nonumber \\ \end{aligned}$$
(14)
where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_{\sigma (h)}, 1+{\nu }^{-}_{\sigma (h)}\Bigg \}\le 1\), \(\theta \ne 1\) and \(\sigma \) is a permutation of \((h=1,2,\ldots ,m)\), such that \(L_{\sigma (h-1)}\le L_{\sigma (h)}\) for \(h=1,2,\ldots , m\).
Table 1  Decision matrix \(R_1\)

         \(G_1\)        \(G_2\)        \(G_3\)        \(G_4\)
\(x_1\)  (0.5, -0.3)    (0.2, -0.1)    (0.6, -0.3)    (0.4, -0.2)
\(x_2\)  (0.6, -0.4)    (0.6, -0.4)    (0.4, -0.5)    (0.4, -0.1)
\(x_3\)  (0.4, -0.1)    (0.2, -0.3)    (0.2, -0.6)    (0.5, -0.2)
\(x_4\)  (0.3, -0.4)    (0.7, -0.4)    (0.7, -0.5)    (0.3, -0.4)
\(x_5\)  (0.6, -0.5)    (0.6, -0.2)    (0.3, -0.2)    (0.4, -0.1)
Table 2  Decision matrix \(R_2\)

         \(G_1\)        \(G_2\)        \(G_3\)        \(G_4\)
\(x_1\)  (0.4, -0.2)    (0.6, -0.2)    (0.4, -0.1)    (0.5, -0.2)
\(x_2\)  (0.5, -0.2)    (0.3, -0.1)    (0.6, -0.2)    (0.3, -0.1)
\(x_3\)  (0.3, -0.4)    (0.4, -0.2)    (0.3, -0.5)    (0.2, -0.4)
\(x_4\)  (0.6, -0.3)    (0.7, -0.3)    (0.4, -0.2)    (0.7, -0.4)
\(x_5\)  (0.5, -0.4)    (0.5, -0.2)    (0.4, -0.5)    (0.4, -0.2)
Table 3  Decision matrix \(R_3\)

         \(G_1\)        \(G_2\)        \(G_3\)        \(G_4\)
\(x_1\)  (0.3, -0.5)    (0.6, -0.2)    (0.4, -0.2)    (0.7, -0.2)
\(x_2\)  (0.5, -0.2)    (0.3, -0.1)    (0.7, -0.3)    (0.4, -0.2)
\(x_3\)  (0.2, -0.6)    (0.2, -0.4)    (0.3, -0.1)    (0.2, -0.3)
\(x_4\)  (0.3, -0.2)    (0.4, -0.5)    (0.3, -0.1)    (0.2, -0.1)
\(x_5\)  (0.6, -0.1)    (0.5, -0.2)    (0.4, -0.3)    (0.5, -0.4)
Table 4  Aggregated values for the decision-makers

         \(G_1\)              \(G_2\)              \(G_3\)              \(G_4\)
\(x_1\)  (0.6274, -0.1664)    (0.6062, -0.0666)    (0.7029, -0.1031)    (0.7107, -0.0969)
\(x_2\)  (0.7416, -0.1466)    (0.6594, -0.1008)    (0.7303, -0.1939)    (0.5763, -0.0560)
\(x_3\)  (0.5067, -0.1180)    (0.3860, -0.1532)    (0.3955, -0.2081)    (0.5413, -0.1331)
\(x_4\)  (0.5707, -0.1633)    (0.8002, -0.2218)    (0.7327, -0.1395)    (0.5725, -0.1449)
\(x_5\)  (0.7620, -0.1687)    (0.7416, -0.0969)    (0.5438, -0.1428)    (0.6309, -0.0833)
Theorem 4.12
Let \(L_h=\{\langle {\mu }^{+}_h,{\nu }^{-}_h\rangle \}\) \((h=1,2,\ldots ,m)\) be a set of BFNs, where \(0<\theta \le \min \nolimits _{h} \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1\), \(\theta \ne 1\). Then, aggregated values of BFNs \(L_h\) based on L-BFOWG operators are also a BFN. Further, if
$$\begin{aligned}{} & {} \mathrm{{L-BFOWG}}(L_1,L_2,\ldots ,L_m)\nonumber \\{} & {} \quad =\left\{ \begin{array}{ll} \bigg \langle \prod ^{m}\limits _{h=1}\Bigg (1-\log _{\theta _{\sigma (h)}}{{{\mu }^{+}_{\sigma (h)}}}\Bigg )^{\psi _h}, -1\\ \quad +\prod ^{m}\limits _{h=1}\Bigg (1-|\log _{\theta _{\sigma (h)}} {{\big (1+{\nu }^{-}_{\sigma (h)}\big )}}|\Bigg )^{\psi _h}\bigg \rangle ,\\ \Bigg \langle \prod ^{m}\limits _{h=1}\Bigg (1-\log _{\frac{1}{\theta _{\sigma (h)}}} {{{\mu }^{+}_{\sigma (h)}}}\Bigg )^{\psi _h}, -1\\ \quad +\prod ^{m}\limits _{h=1}\Bigg (1-|\log _{\frac{1}{\theta _{\sigma (h)}}} {{(1+{\nu }^{-}_{\sigma (h)})}}|\Bigg )^{\psi _h}\Bigg \rangle ,\\ \end{array}\right. \end{aligned}$$
(15)
where \(0<\theta _{\sigma (h)}\le \min \Bigg \{{\mu }^{+}_{\sigma (h)}, 1+{\nu }^{-}_{\sigma (h)}\Bigg \}\le 1, \theta _{\sigma (h)}\ne 1\) and \(0<{\frac{1}{\theta _{\sigma (h)}}}\le \min \Bigg \{{\mu }^{+}_{\sigma (h)}, 1+{\nu }^{-}_{\sigma (h)}\Bigg \}\le 1, \theta _h\ne 1\)
Table 5  Aggregated values for the decision-makers

         \(G_1\)              \(G_2\)              \(G_3\)              \(G_4\)
\(x_1\)  (0.6092, -0.1848)    (0.4840, -0.0717)    (0.6845, -0.1152)    (0.6828, -0.0969)
\(x_2\)  (0.7475, -0.1617)    (0.6093, -0.1383)    (0.6999, -0.2196)    (0.5707, -0.0598)
\(x_3\)  (0.4733, -0.1960)    (0.3531, -0.1608)    (0.3790, -0.2944)    (0.4587, -0.1428)
\(x_4\)  (0.5339, -0.1744)    (0.7712, -0.2296)    (0.6699, -0.1936)    (0.4805, -0.1778)
\(x_5\)  (0.7592, -0.2207)    (0.7375, -0.0969)    (0.5360, -0.1637)    (0.6268, -0.1083)
Table 6  Normalized decision matrix

         \(G_1\)              \(G_2\)              \(G_3\)              \(G_4\)
\(x_1\)  (0.6274, -0.1664)    (0.6062, -0.0666)    (0.7029, -0.1031)    (0.7107, -0.0969)
\(x_2\)  (0.7416, -0.1466)    (0.6594, -0.1008)    (0.7303, -0.1939)    (0.5763, -0.0560)
\(x_3\)  (0.5067, -0.1180)    (0.3860, -0.1532)    (0.3955, -0.2081)    (0.5413, -0.1331)
\(x_4\)  (0.5707, -0.1633)    (0.8002, -0.2218)    (0.7327, -0.1395)    (0.5725, -0.1449)
\(x_5\)  (0.7620, -0.1687)    (0.7416, -0.0969)    (0.5438, -0.1428)    (0.6309, -0.0833)
Table 7  Normalized weighted decision matrix

         \(G_1\)              \(G_2\)              \(G_3\)              \(G_4\)
\(x_1\)  (0.3292, -0.5302)    (0.4138, -0.2929)    (0.3324, -0.5109)    (0.2907, -0.5705)
\(x_2\)  (0.3997, -0.5122)    (0.4504, -0.3408)    (0.3547, -0.5939)    (0.2269, -0.5149)
\(x_3\)  (0.2629, -0.4832)    (0.2659, -0.3986)    (0.1813, -0.6043)    (0.2118, -0.6063)
\(x_4\)  (0.2975, -0.5275)    (0.5584, -0.4602)    (0.3562, -0.5485)    (0.2253, -0.6164)
\(x_5\)  (0.4138, -0.5322)    (0.5106, -0.3358)    (0.2536, -0.5516)    (0.2515, -0.5545)

Traditional MABAC

Let there be a set of l alternatives \(\{x_1,x_2,\ldots ,x_l\}\), m criteria \(\{G_1,G_2,\ldots ,G_m\}\) with weight vector \(\{\psi _1,\psi _2,\ldots ,\psi _m\}\), \((h=1,2,\ldots ,m)\), and s experts \(\{e_1,e_2,\ldots ,e_s\}\) with weighting vector \(\{w_1,w_2,\ldots ,w_s\}\); the traditional MABAC model then proceeds as follows.
Step 1: Form the evaluation matrix \(R=[\beta ^{s}_{gh}]_{l\times m}\), where \(g=1,2,\ldots ,l\), \(h=1,2,\ldots ,m\), as
$$\begin{aligned} R_{g\times h}=x_{gh}=\begin{array}{cc} &{} G_1\; \;\;\;\; G_2\; \;\;\; \;\;\;\cdots \; \; G_m \\ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_l \end{array} &{}\left[ {\begin{array}{*{20}c} \beta ^{s}_{11} &{}\quad \beta ^{s}_{12} &{}\quad \ldots &{}\quad \beta ^{s}_{1m} \\ \beta ^{s}_{21} &{}\quad \beta ^{s}_{22} &{}\quad \ldots &{}\quad \beta ^{s}_{2m} \\ \vdots &{}\quad \vdots &{}\quad \;\; \;\;\;\ddots &{}\quad \vdots \\ \beta ^{s}_{l1} &{}\quad \beta ^{s}_{l2} &{}\quad \ldots &{}\quad \beta ^{s}_{lm} \end{array}} \right] , \end{array} \end{aligned}$$
where \(\beta ^{s}_{gh}\) \((g=1,2,\ldots ,l; h=1,2,\ldots , m)\) denotes the evaluation of the alternative \(x_g\) with respect to the attribute \(G_h\) \((h=1,2,\ldots ,m)\) given by the expert \(e^s\).
Step 2: Utilize some AOs to aggregate the individual values \(\beta ^{s}_{gh}\) into collective values \(\beta _{gh}\).
Step 3: Normalize the fused matrix \({\tilde{D}}=[x_{gh}]_{l\times m}\), \(g=1,2,\ldots ,l; h=1,2,\ldots ,m\), according to the nature of each attribute as per the rule
$$\begin{aligned} R_{gh}= & {} \beta _{gh}, ~\mathrm{{for~benefit~attribute}} \end{aligned}$$
(16)
$$\begin{aligned} R_{gh}= & {} 1-\beta _{gh}, ~\mathrm{{for~cost~attribute}}, \nonumber \\{} & {} (g=1,2,\ldots ,l;h=1,2,\ldots ,m). \end{aligned}$$
(17)
Step 4: From the normalized matrix \(\beta _{gh}\) \((g=1,2,\ldots ,l; h=1,2,\ldots ,m)\) and the criteria weights \(\psi _h\) \((h=1,2,\ldots , m)\), compute the normalized weighted matrix \(\Psi \beta _{gh}\) by the following method:
$$\begin{aligned} \Psi \beta _{gh}=\psi _h \beta _{gh},\;\; g=1,2,\ldots ,l; h=1,2,\ldots ,m. \end{aligned}$$
(18)
Step 5: Obtain the border approximation area (BAA) matrix \(T=[t_{h}]_{1\times m}\), whose values can be assessed as
$$\begin{aligned} t_h=\left( \prod ^{l}\limits _{g=1}\Psi \beta _{gh}\right) ^{1/l}, (g=1,2,\ldots ,l; h=1,2,\ldots ,m).\nonumber \\ \end{aligned}$$
(19)
Step 6: Calculate the distance matrix \(D=[d_{gh}]_{l\times m}\) between each alternative and the BAA using the following formula:
$$\begin{aligned} d_{gh}=\left\{ \begin{array}{ll} d\Bigg (\Psi x_{gh}, t_h\Bigg ), &{} \mathrm{\;\ if}\;\ \;\ \Psi x_{gh}> t_h\\ 0, &{} \mathrm{\;\ if}\;\ \;\ \Psi x_{gh}= t_h\\ -d\Bigg (\Psi x_{gh}, t_h\Bigg ), &{} \mathrm{\;\ if} \;\ \;\ \Psi x_{gh}< t_h, \end{array}\right. \end{aligned}$$
(20)
where \(d\Bigg (\Psi x_{gh}, t_h\Bigg )\) denotes the distance from \(\Psi \beta _{gh}\) to \(t_h\). From the values of \(d_{gh}\), we can deduce the following:
  • if \(d_{gh}>0\), this indicates that the selections are in the upper approximation region \(g^{+}(\mathrm{{UAA}})\).
  • if \(d_{gh}=0\), this indicates that the options are part of the boundary approximation region \(g^{+}(\mathrm{{BAA}})\)
  • if \(d_{gh}<0\), this signifies that options are in the \(g^{-}(\mathrm{{LAA}})\) lower approximation zone.
It is self-evident that the best alternatives belong to \(g^{+}(\mathrm{{UAA}})\), whereas the worst alternatives belong to \(g^{-}(\mathrm{{LAA}})\).
Step 7: Sum the values \(d_{gh}\) of each alternative by the formula
$$\begin{aligned} S_g=\sum ^{m}\limits _{h=1} d_{gh}. \end{aligned}$$
(21)
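To make the procedure concrete, the following Python sketch runs Steps 4–7 (Eqs. (18)–(21)) on a small crisp decision matrix. The matrix, the weights, and the use of a plain signed difference as the distance are hypothetical illustrations and are not taken from the paper.

```python
import math

# Hypothetical normalized crisp decision matrix: rows are alternatives, columns are criteria.
normalized = [
    [0.60, 0.80, 0.40],
    [0.70, 0.50, 0.90],
    [0.55, 0.65, 0.70],
]
psi = [0.5, 0.3, 0.2]  # criteria weights

# Step 4 (Eq. (18)): normalized weighted matrix.
weighted = [[w * v for w, v in zip(psi, row)] for row in normalized]

# Step 5 (Eq. (19)): border approximation area, the geometric mean of each column.
l = len(weighted)
baa = [math.prod(row[h] for row in weighted) ** (1.0 / l) for h in range(len(psi))]

# Step 6 (Eq. (20)): signed distances to the BAA (positive values lie in the upper area).
distances = [[row[h] - baa[h] for h in range(len(psi))] for row in weighted]

# Step 7 (Eq. (21)): ranking scores S_g; larger is better.
scores = [sum(row) for row in distances]
print(baa, scores, sorted(range(l), key=lambda g: scores[g], reverse=True))
```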

MADM model of the proposed method

We have developed a MAGDM technique using logarithmic BF weighted operators in this paper. Let \(x=\{x_1, x_2,\ldots , x_l\}\) be a set of suppliers rated by a set of experts \(e=\{e_1,e_2,\ldots ,e_s\}\) under the set of attributes \(G=\{G_1, G_2, \ldots , G_m\}\). Assume that \(R=\Bigg (\beta ^{k}_{gh}\Bigg )\) is a judgment matrix with \(\beta ^{k}_{gh}=(\mu ^{+}_{\beta ^{k}_{gh}}, \nu ^{-}_{\beta ^{k}_{gh}})_{l\times m}\), where \(\mu ^{+}_{\beta ^{k}_{gh}}\) is the PMD to which the choice \(x_g\) satisfies the attribute \(G_h\) according to the DM \(e_k\), and \(\nu ^{-}_{\beta ^{k}_{gh}}\) is the NNMD to which \(x_g\) does not satisfy the attribute \(G_h\), with \(\mu ^{+}_{gh}\in [0,1]\), \(\nu ^{-}_{gh}\in [-1,0]\) and the condition \(-1\le \mu ^{+}_{\beta ^{k}_{gh}}+\nu ^{-}_{\beta ^{k}_{gh}}\le 1\), \((g=1,2,\ldots ,l)\), \((h=1,2,\ldots ,m)\). The importance of the attributes \(G_h\in G\) is also determined by the DMs \(e_k\) \((k=1,2,\ldots , s)\). Let the weight vector \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)\) for \(G_h\) \((h=1,2,\ldots ,m)\) be known, with \(\psi _h\in [0,1]\) and \(\sum ^{m}\nolimits _{h=1}\psi _h=1\). Assume that the DMs' weighting vector is \(w=(w_1,w_2,\ldots ,w_s)\), with \(w_k\in [0,1]\) and \(\sum ^{s}\nolimits _{k=1}w_k=1\). As a result, we created an algorithm for the GDM technique employing bipolar fuzzy information based on the AOs, which consists of the following steps:
Algorithm 1: Logarithm BFN operators
Input: BFN information matrix
Output: Choice of a suitable supplier.
Step 1: Arrange the information for each alternative in the judgement matrix \(R^{(k)}\) as follows:
$$\begin{aligned} R^{(k)}=\begin{array}{cc} &{} G_1\; \;\;\;\; G_2\; \;\;\; \;\;\;\cdots \; \; G_m \\ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_l \end{array} &{}\left[ {\begin{array}{*{20}c} \beta ^{(k)}_{11} &{}\quad \beta ^{(k)}_{12} &{}\quad \ldots &{}\quad \beta ^{(k)}_{1m} \\ \beta ^{(k)}_{21} &{}\quad \beta ^{(k)}_{22} &{}\quad \ldots &{}\quad \beta ^{(k)}_{2m} \\ \vdots &{}\quad \vdots &{}\quad \;\; \;\;\;\ddots &{}\quad \vdots \\ \beta ^{(k)}_{l1} &{}\quad \beta ^{(k)}_{l2} &{}\quad \ldots &{}\quad \beta ^{(k)}_{lm} \end{array}} \right] . \end{array} \end{aligned}$$
(22)
Step 2: Normalize the BF judgement matrix \(R=(\beta _{gh})_{l\times m}\) into \({\tilde{R}}=(\tilde{\beta }_{gh})_{l\times m} =(\mu ^{+}_{gh},\nu ^{-}_{gh})_{l\times m}\) by
$$\begin{aligned} \beta _{gh}=\left\{ \begin{array}{ll} (\mu ^{+}_{gh},\nu ^{-}_{gh}), &{} \mathrm{\; If}\;\ G_h \ \mathrm{{are\, benefit\, type}}\\ (\nu ^{-}_{gh},\mu ^{+}_{gh}), &{} \mathrm{\; If}\;\ G_h \ \mathrm{{are\, cost\, type}}.\\ \end{array}\right. \end{aligned}$$
(23)
Step 3: Utilize the L-BFWA or L-BFWG operator to accumulate the \(\beta ^{k}_{gh}\) of the choices \(x_g\) \((g=1,2,\ldots , l)\), and obtain the averaged BFN \(\beta ^{k}_{g}\) of the choice \(x_g\) over all attribute weights for the DM \(e_k\):
$$\begin{aligned} \beta ^{k}_p= & {} \mathrm{{L-BFWA}}(\beta ^{(k)}_{g1},\beta ^{(k)}_{g2},\ldots ,\beta ^{(k)}_{gn}) =\bigoplus ^{m}\limits _{h=1}(w^{(k)}_{gh}\beta ^{(k)}_{gh}) \nonumber \\= & {} \Bigg \langle 1\!-\!\prod ^{m}\limits _{h=1}\!\Bigg (\log _{\theta _h}{{{\mu }^{+}_h}}\Bigg )^{\psi _h},\!-\prod ^{m}\limits _{h=1}\!\Bigg (|\log _{\theta _h}{{\big (1\!+\!{\nu }^{-}_h\big )}}|\Bigg )^{\psi _h}\Bigg \rangle ;\nonumber \\ \end{aligned}$$
(24)
where \( 0<\theta _h\le \min \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1, \theta _h\ne 1\) or \(\beta ^{(k)}_p=\mathrm{{L-BFWG}}(\beta ^{(k)}_{g1},\beta ^{(k)}_{g2},\ldots ,\beta ^{(k)}_{gh}) =\bigotimes ^{m}\nolimits _{h=1}(\beta ^{(k)}_{gh})^{w^{(k)}_{lm}}\)
$$\begin{aligned}= & {} \Bigg \langle \prod ^{m}\limits _{h=1}\Bigg (1-\log _{\theta _h}{{{\mu }^{+}_h}}\Bigg )^{\psi _h},-1\nonumber \\{} & {} +\prod ^{m}\limits _{h=1}\Bigg (1-|\log _{\theta _h}{{\big (1+{\nu }^{-}_h\big )}}|\Bigg )^{\psi _h}\Bigg \rangle ; \end{aligned}$$
(25)
\(0<\theta _h\le \min \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1, \theta _h\ne 1\), to obtain the overall preference values \(\beta _g\) \((g=1,2,\ldots ,l)\) of the supplier \(x_g\).
Step 4: Apply the L-BFWA or L-BFWG aggregation to compute the collective total preference values \(\beta _g\) \((g=1,2,\ldots ,l)\) of the choices \(x_g\) \((g=1,2,\ldots ,l)\), where \(\psi =(\psi _1,\psi _2,\ldots ,\psi _m)^\mathrm{{T}}\) is the attribute weight vector.
Step 5. Obtain the scores \(\Theta (\beta _g)\) \((g=1,2,\ldots ,l)\) of the overall BFNs \(\beta _g\) \((g=1,2,\ldots ,l)\) and rank the choices \(x_g\) to select the best one. If \(\Theta (\beta _g)\) and \(\Theta (\beta _h)\) are equal, then calculate the accuracy values \(\Phi (\beta _g)\) and \(\Phi (\beta _h)\) based on the overall BFNs \(\beta _g\) and \(\beta _h\), and rank the options \(x_g\) by the accuracy values \(\Phi (\beta _g)\) and \(\Phi (\beta _h)\).
Step 6. In accordance with \(\Theta (\beta _g)\) \((g=1,2,\ldots ,l)\), rank all \(x_g\) \((g=1,2,\ldots ,l)\) to choose the desirable one(s).
Step 7. Stop.
Algorithm 2: Logarithm bipolar fuzzy MABAC approach
Input: Evaluation values of the desirable alternatives in the form of a decision matrix.
Output: Evaluation of best supplier.
Steps 1–3: The same as Steps 1–3 of Algorithm 1.
Step 4: Weight the judgement matrix \(\beta _{gh}=\Bigg (\mu ^{+}_{gh},\nu ^{-}_{gh}\Bigg )\) \((g=1,2,\ldots ,l; h=1,2,\ldots ,m)\) by the attribute weights \(\psi _h\) \((h=1,2,\ldots ,m)\) to obtain \(\Psi R_{gh}=\Bigg ({\mu '}^{+}_{gh},{\nu '}^{-}_{gh}\Bigg )\), \(g=1,2,\ldots ,l; h=1,2,\ldots , m\), using the rule
$$\begin{aligned} \Psi \beta _{gh}= & {} \Bigg \langle 1-\prod ^{m}\limits _{h=1}\Bigg (\log _{\theta _h}{{{\mu }^{+}_h}}\Bigg )^{\psi _h},\nonumber \\{} & {} -\prod ^{m}\limits _{h=1}\Bigg (|\log _{\theta _h}{{\big (1+{\nu }^{-}_h\big )}}|\Bigg )^{\psi _h}\Bigg \rangle , \end{aligned}$$
(26)
where \(0<\theta _h\le \min \Bigg \{{\mu }^{+}_h, 1+{\nu }^{-}_h\Bigg \}\le 1, \theta _h\ne 1\)
$$\begin{aligned}{} & {} [x_{gh}]_{l\times m}\nonumber \\{} & {} =\begin{array}{ll} &{} G_1\; \;\;\;\; G_2\; \;\;\; \;\;\;\cdots \; \; G_m \\ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_l \end{array} &{}\left[ {\begin{array}{*{20}c} \Bigg (\mu ^{+}_{11}, \nu ^{-}_{11}\Bigg ) &{}\quad \Bigg (\mu ^{+}_{12},\nu ^{-}_{12}\Bigg ) &{}\quad \ldots &{}\quad \Bigg (\mu ^{+}_{1m},\nu ^{-}_{1m}\Bigg ) \\ \Bigg (\mu ^{+}_{21},\nu ^{-}_{21}\Bigg ) &{}\quad \Bigg (\mu ^{+}_{22},\nu ^{-}_{22}\Bigg ) &{}\quad \ldots &{}\quad \Bigg (\mu ^{+}_{2m},\nu ^{-}_{2m}\Bigg )\\ \vdots &{}\quad \vdots &{}\quad \;\; \;\;\;\ddots &{}\quad \vdots \\ \Bigg (\mu ^{+}_{l1},\nu ^{-}_{l1}\Bigg ) &{}\quad \Bigg (\mu ^{+}_{l2},\nu ^{-}_{l2}\Bigg ) &{}\quad \ldots &{}\quad \Bigg (\mu ^{+}_{lm},\nu ^{-}_{lm}\Bigg ) \end{array}} \right] , \end{array}\nonumber \\ \end{aligned}$$
(27)
where \(x_{gh}=\Bigg (\mu ^{+}_{gh},\nu ^{-}_{gh}\Bigg )\) \((g=1,2,\ldots , l; h=1,2,\ldots , m)\) is the BF information of each choice \(x_g\) with respect to the attribute \(G_h\) \((h=1,2,\ldots ,m)\) provided by the specialists \(e_s\).
Step 5: Calculate BAA values, and the BAA matrix \(T=[t_{h}]_{1\times m}\) can be computed as follows:
$$\begin{aligned} t_h= & {} \left( \prod ^{l}\limits _{g=1}\Psi x_{gh}\right) ^{1/l}, (g=1,2,\ldots ,l; h=1,2,\ldots ,m)\nonumber \\= & {} \left\langle \Bigg (\prod ^{l}\limits _{g=1}{\mu ^{+}_{gh}}\Bigg )^{1/l}, -1+\prod ^{l}\limits _{g=1}\Bigg (1+\nu ^{-}_{gh}\Bigg )^{1/l}\right\rangle . \end{aligned}$$
(28)
Step 6: Compute the distance matrix \(D=[d_{gh}]_{l\times m}\) between each alternative and the BAA using the rule
$$\begin{aligned} d_{gh}=\left\{ \begin{array}{ll} d\Bigg (\Psi x_{gh}, t_h\Bigg ), &{} \mathrm{\;\ if}\;\ \;\ \Psi x_{gh}> t_h\\ 0, &{} \mathrm{\;\ if}\;\ \;\ \Psi x_{gh}= t_h\\ -d\Bigg (\Psi x_{gh}, t_h\Bigg ), &{} \mathrm{\;\ if} \;\ \;\ \Psi x_{gh}< t_h, \end{array}\right. \end{aligned}$$
(29)
where \(d(\Psi x_{gh}, t_h)\) is the mean distance between \(\Psi x_{gh}\) and \(t_h\), which can be determined using Definition 3.8.
Step 7: For each alternative, sum the distance values \(d_{gh}\) over the attributes (a brief code sketch of Steps 5–7 follows Eq. (30)):
$$\begin{aligned} S_g=\sum ^{m}_{h=1} d_{gh}. \end{aligned}$$
(30)
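A minimal sketch of Steps 5–7 is given below. It assumes the weighted BF matrix from Step 4 is already available, compares a weighted value with its BAA value through the score function used above, and substitutes a normalized Hamming-type distance for Definition 3.8, which is not reproduced here; these last two choices are illustrative assumptions rather than the paper's exact definitions.

```python
from math import prod

def baa(column):
    """BAA value t_h for one attribute column of weighted BFNs (Eq. 28)."""
    l = len(column)
    mu = prod(m for m, _ in column) ** (1 / l)
    nu = -1 + prod(1 + n for _, n in column) ** (1 / l)
    return (mu, nu)

def score(b):                       # same score form as in Algorithm 1
    return (1 + b[0] + b[1]) / 2

def distance(a, b):                 # assumed stand-in for Definition 3.8
    return 0.5 * (abs(a[0] - b[0]) + abs(a[1] - b[1]))

def mabac(weighted):
    """weighted: l x m matrix of weighted BFNs (mu, nu).
    Returns the BAA vector t and the comprehensive values S_g (Eqs. 28-30)."""
    l, m = len(weighted), len(weighted[0])
    t = [baa([weighted[g][h] for g in range(l)]) for h in range(m)]
    S = []
    for g in range(l):
        s = 0.0
        for h in range(m):
            d = distance(weighted[g][h], t[h])
            cmp = score(weighted[g][h]) - score(t[h])   # Eq. (29): sign of the distance
            s += d if cmp > 0 else (-d if cmp < 0 else 0.0)
        S.append(s)                                      # Eq. (30)
    return t, S
```

The alternative with the largest \(S_g\) is then preferred, as in the numerical example below.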

Numerical example

Numerical MAGDM model for BFNs

Supplier selection is a crucial aspect of any industry; both the quality of a company's product and its economic growth depend on it. However, selecting a suitable supplier is, in general, a challenging task. The following problem has been constructed for supplier selection. A manufacturing company was looking for a supplier of one of the most essential items utilized in its assembly process. Let \(X=\{x_1,x_2,\ldots ,x_5\}\) represent a set of five prospective worldwide suppliers to examine as alternatives, and \(G=\{G_1,G_2,G_3,G_4\}\) represent the set of attributes to analyze, which are as follows: \((G_1)\): Cost, \((G_2)\): Quality, \((G_3)\): Cope with technology, and \((G_4)\): Reputation. The criteria weight vector is \(\omega =(0.25,0.35,0.22,0.18)^\mathrm{{T}}\). A committee of three decision-makers reviewed the five suppliers, and the best provider is chosen based on their rankings. The DMs rate the suppliers with BFVs in the decision matrices that follow.
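For reference, the problem data stated above can be encoded as follows (a minimal, illustrative sketch with hypothetical variable names; the three decision matrices of BF evaluations in Tables 1, 2 and 3 are not reproduced here, and the decision-maker weights are the ones quoted in Step 3):

```python
suppliers = ["x1", "x2", "x3", "x4", "x5"]
attributes = {"G1": "Cost", "G2": "Quality", "G3": "Cope with technology", "G4": "Reputation"}
attribute_weights = [0.25, 0.35, 0.22, 0.18]   # criteria weight vector
dm_weights = [0.5, 0.23, 0.27]                 # weights of the three decision-makers (Step 3)
```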
Step 1: The DMs have given their decisions (Tables 1,  2 and  3).
Step 2: There is no need for normalization as all the attributes are benefit types.
Step 3: Apply the L-BFWA operator with the DMs' weighting vector \((0.5,0.23,0.27)^\mathrm{{T}}\) to fuse the information provided by the three decision-makers; the result is summarized in Table 4.
Step 4: Use the L-BFWA operator with the attribute weight vector \(\psi =(0.25,0.35,0.22,0.18)^\mathrm{{T}}\) to obtain the collective BFNs of the alternatives \(x_g\) \((g=1,2,3,4,5)\): \(\beta _1=(0.8154,-0.0453)\), \(\beta _2=(0.8354,-0.0534)\), \(\beta _3=(0.6508,-0.0706)\), \(\beta _4=(0.8453,-0.0821)\), and \(\beta _5=(0.8397,-0.0547)\).
Step 5: Compute the score values of the alternatives \(x_g\) \((g=1,2,3,4,5)\) using Definition 3.3 (a worked check of \(\Theta (\beta _1)\) is given after Step 7): \(\Theta (\beta _1)=0.8851\), \(\Theta (\beta _2)=0.8910\), \(\Theta (\beta _3)=0.7901\), \(\Theta (\beta _4)=0.8816\), and \(\Theta (\beta _5)=0.8925.\)
Step 6: The rank of suppliers is \(x_5\succ x_2\succ x_1\succ x_4 \succ x_3\).
Step 7: As per the ranking order, the favorable supplier is \(x_5\).
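For instance, the first score reported in Step 5 follows directly from the collective BFN \(\beta _1=(0.8154,-0.0453)\) with the score form \(\Theta (\beta )=\big (1+\mu ^{+}+\nu ^{-}\big )/2\), which reproduces the reported value:
$$\begin{aligned} \Theta (\beta _1)=\frac{1+0.8154+(-0.0453)}{2}=\frac{1.7701}{2}\approx 0.8851. \end{aligned}$$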
If the L-BFWG operator is used instead of the L-BFWA operator and the same procedure is applied:
Steps 1–2: The same as before.
Step 3: Apply the L-BFWG operator with the DMs' weighting vector \((0.5,0.23,0.27)^\mathrm{{T}}\) to fuse the information provided by the three decision-makers; the result is given in Table 5.
Step 4: Again using the L-BFWG operator with the attribute weight vector \(\psi =(0.25,0.35,0.22,0.18)^\mathrm{{T}}\), the aggregate BFNs of the alternatives \(x_g\) \((g=1,2,3,4,5)\) are \(\beta _1=(0.7670,-0.0534)\), \(\beta _2=(0.8139,-0.0706)\), \(\beta _3=(0.6044,-0.0962)\), \(\beta _4=(0.8925,-0.1964)\), and \(\beta _5=(0.8255,-0.0689)\).
Step 5: Compute the score values for the suppliers: \(\Theta (\beta _1)=0.8568\), \(\Theta (\beta _2)=0.8717\), \(\Theta (\beta _3)=0.7541\), \(\Theta (\beta _4)=0.8481\), and \(\Theta (\beta _5)=0.8783\).
Step 6: Using the score values, the ranking order is \(x_5\succ x_2\succ x_1\succ x_4 \succ x_3\).
Step 7: Here, the desirable supplier is \(x_5\).
Approach 2:
Steps 1–3: The same as Steps 1–3 of Algorithm 1.
Step 4: Normalization is not required, because all of the attributes are benefit types; the corresponding matrix is given in Table 6.
Step 5: Compute the weighted decision matrix using Eq. (18); the result is given in Table 7.
Step 6: Equation (28) is used to calculate the BAA values and the BAA matrix: \(t_1=(0.3356,-0.5173)\), \(t_2=(0.4266,-0.3684)\), \(t_3=(0.2865,-0.5631)\), and \(t_4=(0.2397,-0.5741)\).
Step 7: Calculate the distances \(d\) between the alternatives and the BAA using Eq. (29); the results are given in Table 8.
Step 8: Calculate the sums of the distances \(S_i\) for each option using Eq. (30):
$$\begin{aligned} S_1= & {} \sum ^{m}\limits _{h=1} d_{1h} =(-0.0097)+0.0442+0.0491+0.0273=0.1109 \quad \text { for alternative } x_1, \nonumber \\ S_2= & {} \sum ^{m}\limits _{h=1} d_{2h} =0.0336+0.0237+0.0445+0.0360=0.1378 \quad \text { for alternative } x_2, \nonumber \\ S_3= & {} \sum ^{m}\limits _{h=1} d_{3h} =(-0.0534)+(-0.0955)+(-0.0732)+(-0.0301)=-0.2522 \quad \text { for alternative } x_3, \nonumber \\ S_4= & {} \sum ^{m}\limits _{h=1} d_{4h} =(-0.0242)+0.1118+0.0422+(-0.0284)=0.1014 \quad \text { for alternative } x_4, \nonumber \\ S_5= & {} \sum ^{m}\limits _{h=1} d_{5h} =0.0466+0.0583+0.0222+0.0157=0.1428 \quad \text { for alternative } x_5. \end{aligned}$$
To determine the best choice, use the comprehensive evaluation values \(S_g\). The resulting order is \(x_5\succ x_2\succ x_1\succ x_4\succ x_3\), so \(x_5\) is the most advantageous option.
Table 8  Distance between alternatives and BAA

Alternatives | \(G_1\) | \(G_2\) | \(G_3\) | \(G_4\)
\(x_1\) | \(-0.0097\) | 0.0442 | 0.0491 | 0.0273
\(x_2\) | 0.0343 | 0.0255 | 0.0493 | 0.0360
\(x_3\) | \(-0.0534\) | \(-0.0955\) | \(-0.0732\) | \(-0.0301\)
\(x_4\) | \(-0.0242\) | 0.1118 | 0.0422 | \(-0.0284\)
\(x_5\) | 0.0466 | 0.0583 | 0.0222 | 0.0157
Table 9  BFWA and BFWG operators

Alternative | BFWA | BFWG
\(x_1\) | \((0.4672, -0.2002)\) | \((0.4160, -0.2287)\)
\(x_2\) | \((0.4952, -0.2256)\) | \((0.4606, -0.2832)\)
\(x_3\) | \((0.2931, -0.2805)\) | \((0.2680, -0.3519)\)
\(x_4\) | \((0.5278, -0.3192)\) | \((0.4505, -0.3618)\)
\(x_5\) | \((0.5005, -0.2322)\) | \((0.4789, -0.2779)\)

Comparative results

In this part, the proposed methodology is compared with some existing bipolar fuzzy operators: the weighted bipolar fuzzy BFWA (BFWG) operators [10], whose results are displayed in Table 9; the Dombi-based BFDWA (BFDWG) operators [11], presented in Table 10; and the Hamacher BFHWA (BFHWG) operators [38], whose aggregated results are displayed in Table 11. The rankings produced by these existing operators for BFNs, compared against the MABAC verification, are given in Table 13. The proposed method is also compared with the bipolar fuzzy TOPSIS approach introduced by Akram et al. [64]; here, the weighted normalized decision matrix computed in Table 6 is used. The bipolar fuzzy positive ideal solution (BIFPIS), the bipolar fuzzy negative ideal solution (BIFNIS), and the corresponding relative relational degrees \(\rho (x_i)\) are provided in Table 12. The comparison also covers the bipolar complex fuzzy Hamacher aggregation operators proposed by Mahmood et al. [65] and the MCDM method based on Dombi aggregation operators under a bipolar complex fuzzy environment introduced by Mahmood and Rehman [66].
As Table 13 shows, the existing studies [10, 11, 38, 64] all identify \(x_5\) as the most desirable alternative, whereas the results of [65, 66] cannot be computed with the data of the proposed method, because those approaches operate in a bipolar complex fuzzy environment. When the proposed L-BFWA and L-BFWG operators are applied, the optimal choice is again \(x_5\), as shown in Table 13. The proposed approach to supplier selection via the L-BFWA and L-BFWG operators is further confirmed by the MABAC method, which yields the same result. It is worth noting that the best alternative chosen by the presented approaches coincides with that of several existing strategies. As a result, the proposed model is stable and trustworthy.
Table 10  BFDWA and BFDWG operators

Alternative | BFDWA | BFDWG
\(x_1\) | \((0.4876, -0.1807)\) | \((0.3815, -0.2374)\)
\(x_2\) | \((0.5117, -0.1868)\) | \((0.4421, -0.2985)\)
\(x_3\) | \((0.3022, -0.2314)\) | \((0.2540, -0.3769)\)
\(x_4\) | \((0.5635, -0.2764)\) | \((0.4128, -0.3722)\)
\(x_5\) | \((0.5099, -0.2042)\) | \((0.4662, -0.2931)\)
Table 11  BFHWA and BFHWG operators

Alternative | BFHWA | BFHWG
\(x_1\) | \((0.4550, -0.1402)\) | \((0.4278, -0.2214)\)
\(x_2\) | \((0.4866, -0.1377)\) | \((0.4688, -0.2694)\)
\(x_3\) | \((0.2861, -0.1578)\) | \((0.2719, -0.3341)\)
\(x_4\) | \((0.5090, -0.1704)\) | \((0.4685, -0.3527)\)
\(x_5\) | \((0.4953, -0.1415)\) | \((0.4843, -0.2657)\)
Table 12  Results of \(d(x_i,\mathrm{{BIFPIS}})\), \(d(x_i,\mathrm{{BIFNIS}})\), and \(\rho (x_i)\)

Alternative | \(d(x_i,\mathrm{{BIFPIS}})\) | \(d(x_i,\mathrm{{BIFNIS}})\) | \(\rho (x_i)\)
\(x_1\) | 0.1303 | 0.2169 | 0.6247
\(x_2\) | 0.1139 | 0.2325 | 0.6712
\(x_3\) | 0.2944 | 0.0563 | 0.1605
\(x_4\) | 0.1725 | 0.2495 | 0.5912
\(x_5\) | 0.1043 | 0.2362 | 0.6937
Table 13  Comparative ranks

Methods | Ranking order
Gul [10] | \(x_5\succ x_2\succ x_1\succ x_4\succ x_3\)
Gul [10] | \(x_5\succ x_1\succ x_2\succ x_4\succ x_3\)
Jana et al. [11] | \(x_5\succ x_2\succ x_1\succ x_4\succ x_3\)
Jana et al. [11] | \(x_5\succ x_1\succ x_2\succ x_4\succ x_3\)
Wei et al. [38] | \(x_5\succ x_2\succ x_4\succ x_1\succ x_3\)
Wei et al. [38] | \(x_5\succ x_1\succ x_2\succ x_4\succ x_3\)
Akram et al. [64] | \(x_5\succ x_1\succ x_2\succ x_4\succ x_3\)
Mahmood et al. [65] | Cannot be calculated
Mahmood and Rehman [66] | Cannot be calculated
Proposed L-BFWA | \(x_5\succ x_2\succ x_1\succ x_4 \succ x_3\)
Proposed L-BFWG | \(x_5\succ x_2\succ x_1\succ x_4 \succ x_3\)
Proposed BFNs MABAC model | \(x_5\succ x_2\succ x_1\succ x_4\succ x_3\)

Conclusions

In this article, we describe logarithmic operational laws for BFVs and then introduce some novel AOs, namely the L-BFWA, L-BFOWA, L-BFWG, and L-BFOWG operators. We have also investigated the MABAC technique with the logarithmic bipolar fuzzy weighted operators for MAGDM. The BFNs, their score and accuracy functions, and their operating rules are examined on the basis of the logarithmic operational laws. We then develop an MAGDM method based on the logarithmic bipolar fuzzy AOs and a MABAC model with BFNs for MAGDM. We also use the proposed model to evaluate the efficiency of supplier selection challenges. Finally, we examine the applied model and demonstrate its viability and effectiveness in comparison with existing BF operators. In the future, the proposed model may be applied in various uncertain and fuzzy environments [42, 43, 49, 67–71], and the method can also be extended to a two-tuple bipolar linguistic environment.

Acknowledgements

The authors wish to thank the anonymous reviewers for their valuable comments and helpful suggestions which greatly improved the quality of this paper.

Declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The authors also declare that there is no conflict of interest regarding the publication of this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Literature
4. Pramanik S, Mallick R (2018) VIKOR based MAGDM strategy with trapezoidal neutrosophic numbers. Neutrosophic Sets Syst 22:118–130
7. Zhang WR (1994) Bipolar fuzzy sets and relations: a computational framework for cognitive modeling and multiagent decision analysis. In: Proceedings of the first international joint conference of the North American fuzzy information processing society biannual conference (NAFIPS/IFIS/NASA '94), San Antonio, TX, USA, pp 305–309. https://doi.org/10.1109/IJCF.1994.375115
10. Gul Z (2015) Some bipolar fuzzy aggregations operators and their applications in multicriteria group decision making. M.Phil Thesis
30. Lu M, Busemeyer JR (2014) Do traditional Chinese theories of Yi Jing (Yin-Yang) and Chinese Medicine go beyond western concepts of mind and matter. Mind Matter 12(1):37–59
35. Zhang WR (2016) G-CPT symmetry of quantum emergence and submergence: an information conservational multiagent cellular automata unification of CPT symmetry and CP violation for equilibrium-based many world causal analysis of quantum coherence and decoherence. J Quantum Inf Sci 6(2):62–97. https://doi.org/10.4236/jqis.2016.62008