2025 | Chapter
Applying the Event Time Interval Approach to Big Future
Author: Martin van Duin
Published in: Navigating Complexity in Big History
Publisher: Springer Nature Switzerland
Abstract
Plotting the time intervals between major events (Δtime), or simply the event times, vs. event number (event#) has been used as a method to corroborate qualitative trends of accelerating complexity increase in (big) history. The emphasis of such event time (interval) studies has been on fitting the Δtime data with appropriate mathematical models ([super]exponential, logistic, hyperbolic), typically followed by a discussion of a possible singularity or inflection point around the present time. The event time (interval) approach is attractive because of its simplicity and the impressively linear correlations observed in plots of the logarithm of Δtime [log(Δtime)] vs. event#. However, a variety of concerns has been expressed about the basics underlying this method, which big historians should take to heart. Among others, there is no objective definition of a major event, nor is there a quantitative criterion for the selection of such an event; the selection of major events is therefore subjective. Next, there is no sound support for the underlying assumption that all selected events are equally important. The event selection is also biased from the perspective of today, resulting in an even spread of events over a logarithmic time scale. To explore these issues further, the event time interval approach is here applied to a series of cosmological events, calculated from first principles by Adams and Laughlin (in Reviews of Modern Physics 69:337–372, 1997) in their study “A dying universe”. This series of events starts at the Big Bang and proceeds via the formation and subsequent development and degeneration of stars, planets, galaxies, and black holes, as well as the decay of matter, to, eventually, a dark, cool universe. The series of 37 major events spans 200 orders of magnitude of time and is mainly situated in the future, hence “big future”. In contrast to previous event time interval studies exploring the past, Δtime increases continuously with cosmological event#, both in the past and in the future. This corresponds to a continuous deceleration of change and thus points to a slow “death” of the universe instead of a dramatic singularity. This is most probably the result of the continuously expanding and, thus, cooling and thinning universe, which slows down cosmological processes. Log(Δtime) increases with event# in three stages: first an S-shaped increase, then a relatively flat but still slightly increasing linear part, and finally an upswing. The two transitions between these stages are probably the result of two changes: first from a homogeneous universe to a universe with local heterogeneity, structured matter, and energy-dissipating systems, and next to an increasingly homogeneous universe. Alternatively, the three-stage log(Δtime) curve may be interpreted as evidence of a time bias when looking at both the past and the future from the present: log(Δtime) values are rather small for events in the near past and near future, and increase for events further away in the distant and diffuse past and future. Finally, simple simulations with different event time series have shown that an even spread of events over the whole log(time) range results in linear correlations of both log(Δtime) and log(time) with event#. In addition, Δtime values that are large relative to the event times themselves result in a linear correlation between log(Δtime) and log(time).
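To make the simulation argument above concrete, here is a minimal sketch in Python (illustrative only, not code from the chapter; the 37 log-spaced event times are hypothetical placeholders standing in for a series like that of Adams and Laughlin, not their actual data):

import numpy as np

# Hypothetical series: 37 events spread evenly over log10(time),
# spanning 200 orders of magnitude (units are arbitrary).
n_events = 37
log_t = np.linspace(-40, 160, n_events)
t = 10.0 ** log_t

# Event time interval approach: Δtime between consecutive events.
dt = np.diff(t)                          # length n_events - 1
event_no = np.arange(2, n_events + 1)    # event# of the later event in each pair

# An even spread over log(time) gives a (here exactly) linear correlation
# of log10(Δtime) with event#, illustrating the selection-bias concern.
slope, intercept = np.polyfit(event_no, np.log10(dt), 1)
r = np.corrcoef(event_no, np.log10(dt))[0, 1]
print(f"log10(Δtime) ≈ {slope:.3f} * event# + {intercept:.3f}, r = {r:.6f}")

# With a log10 step d per event, Δtime_i = t_i * (10**d - 1) ≈ t_(i+1), so
# log10(Δtime) also tracks log10(time) linearly when the intervals are
# large relative to the event times themselves.
r2 = np.corrcoef(np.log10(dt), np.log10(t[1:]))[0, 1]
print(f"corr(log10(Δtime), log10(time)) = {r2:.6f}")

Because log-spaced event times give Δtime_i = t_i(10^d − 1) for a constant log10 step d, log(Δtime) is exactly linear in event#; under the stated assumption of an even spread, this reproduces the impressively linear plots that the abstract cautions against over-interpreting.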
In conclusion, the most reliable way to corroborate qualitative trends in (big) history is to study objective, quantitative parameters over time, either for the magnitude of a characteristic or for the diversity of a system. In addition, researchers are encouraged to further develop and explore other or new metrics for complexity in a big history context.