I contend that nothing—no event, action, or reaction—is actually random. At first glance, the implication of such a statement is that everything must be pre-determined. However, this is not the case. I assert that things seem random only because we do not have the technology to identify the forces at work that bring about the outcomes we see—outcomes that would be fully explainable had we the ability (or the inclination) to discover them.
The Birthday Cake
If I lean over a birthday cake all ablaze with candles, and if I take in a deep breath and blow the candles out, that action is pre-determined. In fact, it is obvious that the candles were not extinguished by chance but by design. However, unless I have a way to measure all the forces that necessarily come to bear upon the candles in the process of being blown out, I cannot tell why one candle seemed to hold onto its life longer than the one next to it. The order in which the candles are extinguished will be different every time, giving the appearance of randomness, and yet nothing could be further from the truth.
There are many factors that affect the order in which each candle will finally give up its flame: the strength of the airflow, the position of the one blowing the air, perhaps the temperature in the room, the wax in the candle and the wick entombed in the wax, and so on.
I assert that events like the order in which candles are extinguished on a cake at a birthday party seem random only because we do not have the minute technology to identify the forces at work which bring about the logical outcomes we see.
A Glass of Ice Water
For example, a person places a glass of ice water on top of a van and forgets it is there. The van pulls away and, for a while, the glass hangs in there. But, ultimately, it falls off. That the glass of ice water will fall off of the van is, of course, completely predictable. But when it falls and how it flips, etc., are all unpredictable to the human observer. However, had we a De-Randomizing Computer, we would be able to place sensors on the van, the glass, in the water, in the air, on the road (you get the idea) and capture all the data necessary to predict exactly when, where, and how the glass of ice water would meet its demise. These sensors and the instruments connected to them would have to be extraordinarily sensitive, and we would have to run the experiment ad nauseam before we could get it right. But unless we are prepared to say that glasses of ice water fall off of moving vans due to none of the physical forces exerted upon them, then we must agree that while it seems that the time, place, and style, if you will, of the glass’s topple from the van is random, it is, in fact, no such thing.
Consider this example a little further, particularly the flight path taken by the glass itself. What forces affected its course? Any number of factors will have had an impact upon the glass as it fell, turned, flipped, and ultimately found a landing spot: the size of the glass (tall, wide, short); the composition of the container itself (glass or plastic), and, if glass, the relative heaviness of its bottom; how much of the water remained in the glass as it fell; the ratio of water to ice; and so on. And we could go on and ask about the speed of the van, the terrain it was moving along, the relative turn of the van’s direction, the consistency of the driver’s acceleration. You get the idea.
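The claim being made here—that with complete data the outcome is computable, not random—can be illustrated with a deliberately toy calculation. The sketch below is not the De-Randomizing Computer; it assumes simplified physics (a point mass, no air resistance) and hypothetical numbers for the van's speed and roof height. But within those assumptions, the "unpredictable" fall becomes fully predictable:

```python
# A toy illustration of the essay's claim: given complete initial data,
# the fall of the glass is computable, not random.
# The physics is deliberately simplified (point mass, no air resistance),
# and all numbers are hypothetical stand-ins.

import math

def time_and_place_of_landing(van_speed, roof_height, g=9.81):
    """Predict when and where a glass sliding off a van roof lands.

    van_speed   -- horizontal speed of the glass as it leaves the roof (m/s)
    roof_height -- height of the roof above the road (m)
    Returns (seconds until impact, meters of horizontal travel).
    """
    t = math.sqrt(2 * roof_height / g)  # free-fall time from rest
    x = van_speed * t                   # horizontal drift while falling
    return t, x

t, x = time_and_place_of_landing(van_speed=5.0, roof_height=2.0)
print(f"Impact after {t:.2f} s, {x:.2f} m down the road")
```

Every factor the paragraph above lists (glass shape, water level, terrain) would enter a fuller model as another input; the point of the sketch is only that adding inputs refines the prediction rather than making the event random.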
One last area to consider in this example: what caused the person to put his or her glass on the van? What were the minute thought processes that distracted the person? Let’s say it was a beautiful red and yellow cardinal flying by. Was that random? No. The cardinal was making a choice about where to fly, driven by the forces at work in its search for food, and so on. Nothing is random. It’s just so intricate that random is the easiest explanation. The De-Randomizing Computer works on people too!
The De-Randomizing Computer is ludicrous, of course, first because most of the technology doesn’t exist, and second because there’s no interest in building it. But just because we do not have the means to predict exactly when, where, and in what style the glass will topple does not give us permission to declare it to be random. We just haven’t looked closely enough. Not only so, but it is easier and more convenient to label certain things as random, thus relieving us of any responsibility for them.
Another example is the wind. It seems to be a stellar example of randomness. Even Jesus referred to the wind, saying, “The wind blows wherever it pleases. You hear its sound, but you cannot tell where it comes from or where it is going” (John 3:8). But the wind is not random! There are very specific forces that cause the wind to go one way and then another. We can go pretty far these days in predicting the wind—or at least windiness—but in the end we can go only so far. Had we the De-Randomizing Computer, however, we could determine, down to the faintest brush of wind across our face, when, where, why, and how it would happen. Not only that, but it could determine the effect the wind will have on our skin at a molecular level, and even the emotions and thought processes it will inspire.
So it is in all that exists—nothing that happens is random. The De-Randomizing Computer is able to assess and calculate all the forces at work which bring their influence to bear upon any and every action and event. A splash of water that just reaches the furthest spot on a beach at high tide: the D-RC knows why; knows how. You say, “There is no De-Randomizing Computer.” Ok. But if we could harness the data and store it and call it up and so on, it could tell us how it all happens. Of course, even with A.I. becoming a religion, no intelligence will ever arise out of a machine—certainly not out of a human—that could master such command of the data.
The point is that we are asked to believe that the universe exists as a random accident—a foolish proposition, based only on the data we are currently capable of gathering, and one that makes no allowance for the possibility of a De-Randomizing Computer.
T. D. Weldon wrote, “the problem of Leibniz may be formulated in the question, ‘What account can we offer of the universe which will avoid (a) the metaphysical, (b) the physical problems to which Cartesianism gives rise?’” (T. D. Weldon, Kant’s Critique of Pure Reason (London: Oxford University Press, 1958), 19.)
In other words, “enlightened” thinkers and philosophers like Leibniz felt compelled to arrive at some satisfactory explanation of the universe without appealing to God or resorting to the miraculous. Without allowing, that is, for a De-Randomizing Computer.
Louis Dupré illustrates this in his definition of the Enlightenment when he writes:
As nominalist theologians began to attribute the origin of all things to the inscrutable will of God, they abrogated the link of intelligibility that connected the source of reality with its created effect. As a result, by the beginning of the modern age reality had ceased to be intrinsically intelligible and God no longer provided the rational justification of the world. Henceforth meaning was no longer embedded in the nature of things: it had to be imposed by the human mind (emphasis mine).
Dupré goes into more depth:
Christianity, for centuries the core of European culture, had left a tradition of values on which even secular intellectuals remained dependent long after having abandoned their faith. Most professed a belief in God even while adhering to a philosophy that emptied the idea of God of its traditional content. They continued to regard the idea as indispensable for morality, though morality had largely ceased to rely on it. Voltaire and Rousseau displayed an uncommon proselytic zeal for deist faith and aggressive hostility toward any kind of atheism. But this pragmatic use of the idea of God as foundational principle of the cosmos and as the basis of ethics had too little coherence to resist further deconstruction. Beginning with Hume and Diderot the suspicion grew that neither the origin and preservation of the cosmos nor the sanctioning of morality might require a personal God (emphasis mine).
When I wrote “The De-Randomizing Computer” I had not done any reading on chaos theory. The idea (as far as I presently understand it) is that even when we encounter something that seems random, it only seems that way “for however accurate a measurement of the state at a time, a variation smaller than any it can detect may be responsible for a difference in the eventual outcome.” This is what is suggested in the now-famous question:
Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?
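The sensitivity this question points to—a variation smaller than any measurement can detect producing a different eventual outcome—can be demonstrated in a few lines. The sketch below uses the logistic map, a standard textbook example of chaos (not something drawn from the essay itself), and the starting values are arbitrary:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) at r=4, a standard chaotic example.
# The starting value 0.2 and the tiny perturbation are arbitrary choices.

def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map `steps` times and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-10, 50)  # differs by one part in ten billion
print(f"after 50 steps: {a:.6f} vs {b:.6f}")
# The two trajectories, indistinguishable at the start, typically end up
# far apart: the initial difference roughly doubles at every step.
```

Note that the map itself is perfectly deterministic—run it twice from the same value and you get the same answer—which is exactly the essay's distinction between "random" and "merely unmeasured."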
But in “The De-Randomizing Computer,” my point was to assert that we label certain things and events “random” because we lack the technology or the interest (or both) to look deeply enough into the details of those events to discern the smallest pressures that bring them about. You can imagine how excited I was, while reading Leonard Smith’s Chaos: A Very Short Introduction, to discover Pierre-Simon Laplace’s A Philosophical Essay on Probabilities. He wrote:
We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.
I call Laplace’s “intelligence” the De-Randomizing Computer.
It necessarily follows that the human decision to place the ice water on the van was not random either. Were we able to do so—had we a De-Randomizing Computer—we could go back and examine the minute thought processes that led to the action. Not only so, but we could examine all of the forces and thought processes that brought the human actor, glass of ice water in hand, to the place and time where he or she would perform the act of placing it on a van that was soon to begin moving down the road. Again, this in no way supports a postulation of predetermination. On the contrary, each human act is based on some level of reasoning, even if that reasoning has been compromised.
 Louis Dupré, The Enlightenment and the Intellectual Foundations of Modern Culture (New Haven: Yale University Press, 2004), 2.
 Ibid., 14-15.
 The reader will remember at the outset that the author intentionally does his own thinking first.
 Simon Blackburn, The Oxford Dictionary of Philosophy (Oxford: Oxford University Press, 2008), 59.
 Lorenz, Edward Norton, “Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” Address given at the 139th Annual Meeting of the American Association for the Advancement of Science, Sheraton Park Hotel, Boston, Mass., December 29, 1972.
 Written in 1812. My copy is a translation of the sixth edition produced in 1902.