2003 | Chapter
Entropy and Complexity of Sequences
Authors: Werner Ebeling, Miguel Jimenez-Montano, Thomas Pohl
Published in: Entropy Measures, Maximum Entropy Principle and Emerging Applications
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
We analyze and discuss sequences of letters and time series coded as letter sequences over certain alphabets. The main subjects are macromolecular sequences (e.g., nucleotides in DNA or amino acids in proteins), neural spike trains, and financial time series. Several sequence representations are introduced, including return plots, surrogate sequences, and surrogate processes. We give a short review of the definitions of entropies and some other informational concepts. We also point out that entropies have to be considered as fluctuating quantities and study the corresponding distributions. In the last part we consider grammatical concepts. We discuss algorithms to evaluate syntactic complexity and information content and apply them to several special sequences. We compare data from seven neurons, before and after penicillin treatment, by encoding their inter-spike intervals and computing their entropies, syntactic complexity, and informational content. Classifying these sequences with respect to their structure or randomness by means of these measures gives similar results. The other examples show significantly less order.
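As an illustrative sketch of the entropy measures discussed in the chapter (not the authors' actual code), the block entropy of a symbol sequence can be estimated by counting length-n substrings. For a strictly periodic sequence the block entropy saturates at about one bit instead of growing with n, which is how such measures distinguish structured from random sequences:

```python
from collections import Counter
from math import log2

def block_entropy(seq, n):
    """Shannon entropy (in bits) of the empirical distribution of
    length-n blocks (substrings) in a symbol sequence."""
    blocks = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum(c / total * log2(c / total) for c in counts.values())

# A periodic two-letter sequence: 'A' and 'B' occur equally often,
# so the single-letter entropy is exactly 1 bit ...
periodic = "AB" * 32
print(block_entropy(periodic, 1))  # -> 1.0
# ... but only two of the four possible digrams ("AB", "BA") occur,
# so the 2-block entropy stays near 1 bit rather than rising to 2.
print(block_entropy(periodic, 2))
```

A random binary sequence, by contrast, would have an n-block entropy growing roughly like n bits, which is why the per-letter entropy differences serve as an order/randomness diagnostic for the spike-train codings mentioned above.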