2013 | OriginalPaper | Chapter
Two New Results for Identification for Sources
Author: Christian Heup
Published in: Information Theory, Combinatorics, and Search Theory
Publisher: Springer Berlin Heidelberg
We provide two new results for identification for sources. The first result concerns block codes. In [Ahlswede and Cai, IEEE-IT, 52(9), 4198-4207, 2006] it is proven that the $q$-ary identification entropy $H_{I,q}(P)$ is a lower bound for the expected number $L(P,P)$ of checkings during the identification process. A necessary assumption for this proof is that the uniform distribution minimizes the symmetric running time $L_{\mathcal C}(P,P)$ for binary block codes $\mathcal C=\{0,1\}^k$. This assumption is proved in Sect. 2 not only for binary block codes but for any $q$-ary block code. The second result concerns upper bounds for the worst-case running time. In [Ahlswede, Balkenhol and Kleinewächter, LNCS, 4123, 51-61, 2006] the authors proved in Theorem 3 that $L(P) < 3$ by an inductive code construction. We present an alteration of their scheme which strengthens this upper bound significantly.
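For reference, a minimal sketch of the quantity involved in the first result, assuming the definition of $q$-ary identification entropy given in the cited Ahlswede–Cai paper, $H_{I,q}(P) = \frac{q}{q-1}\bigl(1 - \sum_u P(u)^2\bigr)$ (the function name and example distribution below are illustrative, not from the chapter):

```python
def identification_entropy(p, q=2):
    """q-ary identification entropy H_{I,q}(P) = q/(q-1) * (1 - sum_u P(u)^2),
    following the definition attributed here to Ahlswede and Cai (2006).
    p is a probability distribution given as a list of probabilities."""
    return q / (q - 1) * (1 - sum(x * x for x in p))

# Uniform distribution on 4 source outputs, binary case (q = 2):
print(identification_entropy([0.25] * 4, q=2))  # → 1.5
```

On the uniform distribution over $n$ outputs this evaluates to $\frac{q}{q-1}\bigl(1 - \frac{1}{n}\bigr)$, consistent with the role the uniform distribution plays as the minimizer discussed in Sect. 2.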