In everyday parlance, we have an intuitive understanding of what information is. It can be a set of facts that we use to make decisions. It can be a piece of gossip that you hear from a friend. Or it can be whatever is in the back of that truck driving down the “superhighway” of the Internet. For mathematicians and scientists from a broad range of disciplines, however, information has a very specific, and quite different, connotation, and an entire branch of study called information theory has developed over the last century to explain what it is and how it works.
Recently published in paperback, Charles Seife’s book Decoding the Universe is an accessible tour of information theory and an exploration of why it is so important in the sciences today. Perhaps the key idea Seife works to communicate is that information is a real, concrete, physical “thing,” capable of being measured and manipulated like other properties such as mass or energy. It is a difficult concept to wrap your head around, but according to the theory, matter, energy, and everything else that makes up the universe are reducible to bits: collections of physically embodied, yes-or-no, on-or-off propositions whose patterns make up the code we recognize as higher forms of organization. Anyone reading this online has likely heard of the bit in relation to computers, which operate by manipulating a binary code expressed in patterns of electrical charges. The concept becomes harder to fully appreciate, though, when you consider that our thoughts, our genetic makeup, and the nature of matter itself all rely on the ability to encode and process information. As researchers have gained the ability to monitor the firing of individual neurons, read the base pairs that constitute the genome, and observe quantum-scale effects, it increasingly appears that seemingly inscrutable, complex processes may be reducible to these simple binary relationships. At the very least, the approach has become a very useful set of tools.
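To make the bit a little more concrete, here is a minimal sketch of my own (not an example from the book) showing how an ordinary message dissolves into a stream of yes-or-no propositions and can be rebuilt from them:

```python
# A rough illustration: any message can be flattened into 0/1 propositions
# (bits) and reassembled from them without losing anything.

def to_bits(message: str) -> list[int]:
    """Encode each byte of a message as eight 0/1 values."""
    return [int(bit) for byte in message.encode("utf-8")
            for bit in format(byte, "08b")]

def from_bits(bits: list[int]) -> str:
    """Reassemble the original message from its stream of bits."""
    byte_values = [int("".join(str(b) for b in bits[i:i + 8]), 2)
                   for i in range(0, len(bits), 8)]
    return bytes(byte_values).decode("utf-8")

bits = to_bits("code")
print(bits[:16])        # the first two characters as on/off propositions
print(from_bits(bits))  # -> "code"
```

Nothing about the message survives except the pattern of ons and offs, which is exactly the sense in which information theory treats text, neurons, genomes, and particles alike.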
Decoding isn’t the type of book that advances a new argument, but it is packed with clear, concise explanations of many of the key concepts driving this branch of physics and mathematics in recent years. If you don’t know the difference between Maxwell’s demon and Schrödinger’s cat, or if you want an accessible introduction to some of the current frontiers in physics, this is a great place to start. I would have welcomed more historical perspective: although information theory as such really only began with Claude Shannon and Alan Turing, there is a long lineage of devices, reaching back at least to Renaissance-era bell towers, that embodied the principles of information storage and retrieval that would make computation possible. I also wished there had been more about how information matters to the life sciences; the market, meanwhile, is by now oversaturated with expositions of the anthropic principle. Nevertheless, the book is recommended.
Also spotted this week was a superb article by Brian Hayes in American Scientist magazine on the conceptual similarities between computer science and the logistics of running a fleet of freight trains. As train systems became increasingly complicated in the 1880s, a genre of mathematical puzzles developed to explore how to move individual cars around a railyard most efficiently using a limited number of switches. Such problems, according to Hayes, “informed the early years of the development of algorithms and data structures in computer science,” while “the theoretical analysis of algorithms has suggested ways for railroads to improve their operations.”
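The correspondence is easy to see once you notice that a stub siding, which cars enter and leave from the same end, behaves like a stack. As a rough sketch of that idea (my own simplification, not code from Hayes’s article), here is the classic question of whether a train can be put into order using a single siding:

```python
# A simplified model: one stub siding acts as a LIFO stack, and rearranging
# an arriving train into ascending order is the stack-sorting problem.

def sort_with_siding(incoming: list[int]) -> list[int] | None:
    """Try to send cars out in ascending order using one siding (a stack).
    Returns the outgoing order, or None if this arrival order can't be sorted."""
    siding: list[int] = []
    outgoing: list[int] = []
    next_wanted = 1
    for car in incoming:
        siding.append(car)                    # shunt the arriving car onto the siding
        while siding and siding[-1] == next_wanted:
            outgoing.append(siding.pop())     # pull a car off as soon as it is due
            next_wanted += 1
    return outgoing if not siding else None

print(sort_with_siding([3, 2, 1, 4]))  # [1, 2, 3, 4] -- sortable with one siding
print(sort_with_siding([2, 3, 1]))     # None -- this order needs more than one siding
```

Swap the siding for a loop of track that cars pass straight through and you have a queue instead of a stack, which is the kind of translation between railyard hardware and data structures that makes the analogy so fruitful.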