Propagating Biocomputers using Turing Machines
Copyright © 2004, David A. Epstein.
All Rights Reserved.

July 1, 2004

A Turing Machine is a computer that can compute anything that is computable. That's a lot of computing! The Church-Turing thesis holds that if something is computable, it can be computed by a Turing Machine. But what is this machine? It's a computer that solves problems by computing along a tape of (in theory) infinite length. The tape is divided into squares, and the machine head reads the value of a given square. Based on instructions fed to it, which include computation states, alphanumeric values, and directional movements, the machine replaces that value with another one. Then the head moves in the direction indicated by the executed instruction. For example, if the machine is in state A1 and reads a value of 0, it will replace that value with a 1 and move to the left; if in state B1, it will do likewise, but move to the right. Through a series of such instructions, any computable problem can be solved by this method.
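To make this concrete, here's a rough sketch of such a machine in Java. The transition table just encodes the toy A1/B1 example above (the class name, tape contents, and halting convention are mine, purely for illustration), and a finite array stands in for the infinite tape:

    import java.util.HashMap;
    import java.util.Map;

    public class TinyTuringMachine {
        // A transition rule: what to write, which way to move
        // (-1 = left, +1 = right), and the next state.
        record Rule(char write, int move, String next) {}

        public static void main(String[] args) {
            Map<String, Rule> rules = new HashMap<>();
            // The toy example from the text: state A1 reading 0 writes 1 and
            // moves left; state B1 reading 0 writes 1 and moves right (then halts).
            rules.put("A1:0", new Rule('1', -1, "B1"));
            rules.put("B1:0", new Rule('1', +1, "HALT"));

            char[] tape = "000".toCharArray(); // finite stand-in for the infinite tape
            int head = 1;                      // start on the middle square
            String state = "A1";

            while (!state.equals("HALT")) {
                Rule r = rules.get(state + ":" + tape[head]);
                if (r == null) break;          // no matching instruction: halt
                tape[head] = r.write;
                head += r.move;
                state = r.next;
            }
            System.out.println(new String(tape)); // prints 110
        }
    }

Running it prints 110: the two squares visited were flipped, exactly as the two instructions dictate.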

How do we apply this to biocomputing, or more to the point, why would we want to? There are many reasons, but I will address only two here:
1) to discover what makes a cell healthy;
2) to combat invasive diseases.
The two go hand in hand, no doubt. The aim is thus twofold:
1) if a cell is already healthy, leave it alone but provide it with an early-warning disease detection and prevention system;
2) if a cell is sick, provide it with the means to get healthy again.

These are pretty lofty goals, and I'm not going to try to solve them with Turing Machines! What I will suggest is that IF it's possible to keep the body healthy and/or fight diseases (e.g., cancer), then implementing these Turing Machines might be one approach, and perhaps the best approach available. This, of course, is contingent upon the problem (how to maintain physical health or fight disease) being properly modelled and computed. If it can't be, then it certainly can't be solved with Turing Machines, and there's no reason to continue this discussion. I would contend that the problem is solvable, though I can't prove this assertion.

In addition, I'm not going to give a dissertation on cellular or molecular biology. Although I have some knowledge in this area, I certainly don't know enough to lecture on it here. My ideas are very general, and I propose them mainly to spur others who are more intelligent and have a better understanding of these issues. There can be no question that others have been thinking along similar lines.

So why did I get excited about this? It was the recent development of a biocomputer by Professor Ehud Shapiro (from whom I took a class at the Weizmann Institute of Science in Israel) and his trusted team. When I heard about it, I immediately thought that a new computer platform had been developed. Though not feasible at present, the biocomputer will eventually need an operating system and software applications. And the first application I thought of was a molecular Turing Machine. But keep it quiet, OK? We wouldn't want our friends up in Redmond to find out about it. ; )

And now for the general algorithm. Let's not beat around the bush!

The Algorithm (in pseudo code of course):

1) Propagate a seed multiprocessor Turing Machine to a selected cell in the body. The entire cell is treated as a sinewy, winding, continuous Turingesque tape.
2) Using one of the biocomputer's processors, calculate all the positive attributes of the cell: genetic traits, survivability, fitness, mitosis, replication accuracy, immunity, metabolism, and the actions of anti-viral agents. This analysis of positives should also factor in correlations between attributes (e.g., the relationship between genetics and fitness). Each attribute is computed by executing a series of instructions over a sequence of cellular "squares"; a given series of Turing "states" corresponds to a specific attribute.
3) Using another of the biocomputer's processors, calculate all the negative attributes of the cell, or potential weaknesses: infection by a disease, fatigue, degeneration, collapse, aging, etc. Use computation techniques similar to those described above.
4) Relay the results of #2 and #3 to another processor, which runs simulations of a battle between #2 and #3 in which #2 is ultimately victorious; discard all other results. The objective is to run Turing simulations in which victories are computed faster than defeats. Results from previous trials can be used to condition the system: to learn from faulty approaches, correct mistakes and miscalculations, and help reformulate the problem to account for newly introduced obstacles, defense arrangements, counterattacks, and other factors that necessitate a change in strategy. In the end, the positive attributes must outweigh the negative ones. But the problem must always be normalized to mirror realistic cellular scenarios. (The first sketch after this list illustrates this selection loop.)
5) Concurrently spawn another Turing Machine to propagate to an adjacent cell.
6) The new machine performs similar computations. The main difference is that it leverages step 7:
7) A network connection is established between the machines. Three things are done here:
  • The two machines share results with each other. Each can help the other perform the exhaustive step #4. In fact, each can learn from the simulations run by the other and discover algorithmic shortcuts for its own runs.
  • The network connection learns how cells share and propagate genetic and biochemical information. Like computer networks, cells work with designated protocols. Discovering the protocols of beneficial and harmful cellular "communication" and "transmission" is the key component of this step.
  • Network simulations are run, similar to step #4, but run across the network.
8) Each new Turing Machine initiates a request to spawn another (the idea of self-replicating machines was proposed by the great mathematician John von Neumann). Think of each newly spawned machine as a new square to be analyzed by the original Turing Machine, so each new machine is actually one instruction executed by the previous one. This relationship is implemented recursively: the original Turing Machine (TM1) has a master set of instructions for spawning another machine (TM2). All the states and read/write actions are preserved and recorded, and the master set of instructions is passed to TM2. TM2 in turn executes these master instructions to spawn TM3. TM2 is the "2nd square" of TM1's spawning computations, while TM3 is the "3rd square" of TM1's spawning computations AND the "2nd square" of TM2's computations. (The second sketch after this list illustrates this recursive spawning.)
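First, a rough sketch of step 4's selection loop in Java. The random scores are placeholders: I'm assuming the real numbers would come from the processors of steps 2 and 3, and the class and record names are mine:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class BattleSimulator {
        // One simulated battle between the cell's positive and negative attributes.
        record Trial(double positiveScore, double negativeScore) {
            boolean victory() { return positiveScore > negativeScore; }
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            List<Trial> victories = new ArrayList<>();

            for (int i = 0; i < 10_000; i++) {
                // Placeholder scores; in the real system these would be
                // computed by the two processors of steps 2 and 3.
                Trial t = new Trial(rng.nextDouble(), rng.nextDouble());
                if (t.victory()) {
                    victories.add(t); // keep only the runs where the positives win
                }
                // All other results are discarded, per step 4.
            }
            System.out.printf("kept %d of 10000 trials%n", victories.size());
        }
    }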
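And second, a rough sketch of step 8's recursive spawning, under the simplifying assumption that "spawning" amounts to constructing a successor and recording it as one executed "square" (all names here are hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    public class SpawningMachine {
        final int id;
        // Recorded spawn actions, per step 8's bookkeeping requirement.
        final List<Integer> spawnLog = new ArrayList<>();

        SpawningMachine(int id) { this.id = id; }

        // Spawn a chain of successors; each spawn is one executed "square" of
        // this machine's computation, and the child repeats the master
        // instructions by calling the same method.
        SpawningMachine spawnChain(int remaining) {
            if (remaining == 0) return this;
            SpawningMachine child = new SpawningMachine(id + 1);
            spawnLog.add(child.id); // record the spawn before handing off
            return child.spawnChain(remaining - 1);
        }

        public static void main(String[] args) {
            SpawningMachine tm1 = new SpawningMachine(1);
            SpawningMachine last = tm1.spawnChain(3);
            System.out.println("last machine spawned: TM" + last.id); // TM4
        }
    }

Note how TM1's log records TM2, TM2's records TM3, and so on: each new machine is one square of its predecessor's computation.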

In other words: compute, spawn, and keep track of the internal states. The Java servlet model is ideal for handling multiple requests and preserving state (which is absolutely essential for this propagation scenario to work). I see that model as a useful tool to help us solve the above problems, as sketched below.
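As a sketch only (it needs a servlet container to actually run, and the attribute name "step" is mine), here's the kind of thing I mean: one servlet instance services many concurrent requests, while each session preserves one machine's state between requests:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // One servlet instance handles many concurrent requests ("machines"),
    // while each HTTP session preserves that machine's state between requests.
    public class MachineStateServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true); // create or retrieve state
            Integer step = (Integer) session.getAttribute("step");
            step = (step == null) ? 0 : step + 1; // advance this machine's state...
            session.setAttribute("step", step);   // ...and preserve it for next time
            resp.getWriter().println("machine step: " + step);
        }
    }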

There are other ideas to apply here: neural networks, genetic algorithms, cellular automata, information theory (à la Claude Shannon), holographic encoding, chaos theory, nanotechnology, and maybe even quantum computers. We also need to consider the limitations and faultiness of this model. I'm just too tired right now to discuss these. Good night for now.