Self replicating systems and low cost manufacturing
Ralph C. Merkle
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304
merkle@xerox.com
Further information on self replicating systems is available at
http://www.zyvex.com/nanotech/selfRep.html
This paper was first published in
The Ultimate Limits of Fabrication and Measurement,
M.E. Welland, J.K. Gimzewski, eds.; Kluwer, Dordrecht, 1994, pages 25-32.
This electronic reprint is available on the web at
http://www.zyvex.com/nanotech/selfRepNATO.html,
and might differ from the printed version.
1. Introduction
Experimental[7] and theoretical[4, 6, 8, 21, 22] work both
support the idea that we will be able to fabricate precise
molecular structures (such as molecular logic elements) by
positioning individual atoms and molecules. However, even
the ability to make and interconnect a few atomically precise
logic elements will have limited impact when we must make and
interconnect at least trillions of logic elements to surpass
projected future lithographic capabilities.
The only demonstrated method of mass producing complex highly
precise structures at a low cost per kilogram is by
programmable self replicating systems as exemplified by
potatoes, wheat, wood, etc. (Electronics are not cheap: on a
per kilogram basis they are more than one hundred times as
expensive as gold). Unfortunately, it's not clear that such
biological methods will be able to produce the full range of
products we desire. Many of today's products are not made
of biological material and there is no particular reason to
believe this situation will change. Today's artificial
computers are not made out of protein because other materials
offer superior performance. Biological computers, despite
their many virtues, have high error rates, millisecond logic
delays and meter-per-second signal propagation speeds: they
are grossly uncompetitive.
While the design and development of non-biological
programmable self replicating systems suited to the
manufacture of complex high performance computer systems (as
well as a range of other high precision products) might at
first appear daunting, there has been much theoretical work
in this area. Starting with von Neumann's "universal
constructor" and "kinematic machine" in the 1950's and
continuing through the more recent proposals by Drexler for
an "assembler" this work describes a range of possible system
designs. Many of these systems are not overly complex by
today's engineering standards. More recent work suggests
that further simplifications are possible and that research
to determine the simplest and most easily manufacturable
programmable self replicating system should be pursued.
2. General manufacturing systems
Because biological self replicating systems are so
ubiquitous it is common to assume that their specific
properties and idiosyncratic features are an inherent
requirement for all self replicating systems. However,
programmable self replicating systems designed for
manufacturing need bear little resemblance to biological
systems. We shall call such non-biological systems
general manufacturing systems. In this article we
highlight the differences between biological systems and
general manufacturing systems.
Design concepts for general manufacturing systems have been
discussed for many years [10, 27, 28], and their utility in
manufacturing has been emphasized recently [4, 5, 6, 18].
These proposals draw on a body of work started by von
Neumann[27]. A wide range of methods have been
considered[10, particularly pages 190 et seq.,
"Theoretical Background"]. The von Neumann architecture for
a self replicating system is the ancestral and archetypal
proposal[24, 27].
3. The von Neumann architecture for a general
manufacturing system
Von Neumann's proposal consisted of two central elements: a
universal computer and a universal constructor
(see figure 1). The universal computer
contains a program that directs the behavior of the universal
constructor. The universal constructor, in turn, is used to
manufacture both another universal computer and another
universal constructor. Once construction is finished the
program contained in the original universal computer is
copied to the new universal computer and program execution
is started.
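The control flow of one replication cycle can be sketched in a few
lines of Python (an illustrative sketch only; the class and method
names below are invented for this example and are not part of von
Neumann's formal construction):

```python
# Illustrative sketch of the von Neumann replication cycle.
# Class and method names are invented for this example.

class UniversalConstructor:
    def build(self, blueprint):
        """Build whatever object the blueprint describes."""
        return blueprint()          # stand-in for actual construction

class UniversalComputer:
    def __init__(self, program=None):
        self.program = program      # the stored plans / instructions

    def run(self, constructor):
        # The program directs the constructor to build a new
        # constructor and a new computer ...
        new_constructor = constructor.build(UniversalConstructor)
        new_computer = constructor.build(UniversalComputer)
        # ... then the parent's program is copied into the child;
        # in von Neumann's scheme the child is then started.
        new_computer.program = self.program
        return new_computer, new_constructor

# One generation of replication:
parent = UniversalComputer(program="plans for computer + constructor")
child_computer, child_constructor = parent.run(UniversalConstructor())
```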
Von Neumann worked out the details for a constructor that
worked in a theoretical two-dimensional cellular automata
world (parts of his proposal have since been modeled
computationally[24]). The constructor had an arm which it
could move about and which could be used to change the state
of the cell at the tip of the arm. By progressively sweeping
the arm back and forth and changing the state of the cell at
the tip, it was possible to create "objects" consisting of
regions of the two-dimensional cellular automata world which
were fully specified by the program that controlled the
constructor.
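A toy model, far simpler than von Neumann's actual 29-state automaton,
illustrates the essential point: the program of arm moves and state
writes fully specifies the object that appears in the grid. The
operations and grid below are invented for illustration:

```python
# Toy sketch of a constructing arm in a 2-D cellular world.  It only
# illustrates how a program of moves and writes specifies an "object".

def construct(program, width=8, height=8):
    grid = [[0] * width for _ in range(height)]   # quiescent cells
    x = y = 0                                     # arm tip position
    for op, arg in program:
        if op == "move":                          # move the arm tip
            dx, dy = arg
            x, y = x + dx, y + dy
        elif op == "write":                       # set the state under the tip
            grid[y][x] = arg
    return grid

# A program that sweeps the arm across two rows, writing state 1:
program = [("write", 1), ("move", (1, 0)), ("write", 1),
           ("move", (1, 0)), ("write", 1),
           ("move", (-2, 1)),                     # "carriage return"
           ("write", 1), ("move", (1, 0)), ("write", 1),
           ("move", (1, 0)), ("write", 1)]

for row in construct(program)[:3]:
    print(row)
```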
While this solution demonstrates the theoretical validity of
the idea, von Neumann's kinematic constructor (which was not
worked out in such detail) has had perhaps a greater
influence, for it is a model of general manufacturing which
can more easily be adapted to the three-dimensional world in
which we live. The kinematic constructor was a robotic arm
which moved in three-space and which grasped parts from a sea
of parts around it. These parts were then assembled into
another kinematic constructor and its associated control
computer.
An important point to notice is that self replication, while
important, is not by itself an objective. A device able to
make copies of itself but unable to make anything else would
not be very valuable. Von Neumann's proposals centered
around the combination of a universal constructor, which
could make anything it was directed to make, and a universal
computer, which could compute anything it was directed to
compute. It is this ability to make any one of a broad range
of structures under flexible programmatic control that is of
value. The ability of the device to make copies of itself
is simply a means to achieve low cost, rather than an end in
itself.
4. Drexler's architecture for an assembler
Drexler's assembler follows the von Neumann kinematic
architecture, but is specialized for dealing with systems
made of atoms. The essential components in Drexler's
assembler are shown in figure 2. The
emphasis here (in contrast to von Neumann's proposal) is on
small size. The computer and constructor both shrink to the
molecular scale, while the constructor takes on additional
detail consistent with the desire to manipulate molecular
structures with atomic precision. The molecular constructor
has two major subsystems: (1) a positional capability and
(2) the tip chemistry.
The positional capability might be provided by one or more
small robotic arms, or alternatively might be provided by any
one of a wide range of devices that provide positional
control[9, 15]. The emphasis, though, is on a positional
device that is very small in scale: perhaps 0.1 microns (100
nanometers) or so in size.
The tip chemistry is logically similar to the ability of the
von Neumann universal constructor to alter the state of a
cell at the tip of the arm, but now the change in "state"
corresponds to a change in molecular structure. That is, we
must specify a set of well defined chemical reactions that
take place at the tip of the arm, and this set must be
sufficient to allow the synthesis of the structures of
interest.
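Viewed abstractly, the two subsystems cooperate as in the following
sketch: a positional device steps the tip through a programmed
sequence of sites, and at each site one reaction from a fixed, well
characterized set is applied. The class, method names, and reaction
labels are placeholders for illustration, not Drexler's actual
specification:

```python
# Sketch of the two assembler subsystems: positional control plus a
# small set of well defined tip reactions.  The names and the reaction
# set below are placeholders.

ALLOWED_REACTIONS = {"abstract_H", "deposit_C2", "deposit_H"}

class MolecularAssembler:
    def __init__(self):
        self.tip_position = (0.0, 0.0, 0.0)   # tip site, nanometres say
        self.workpiece_log = []

    def move_tip(self, xyz):
        """Positional subsystem: place the tip at a chosen site."""
        self.tip_position = xyz

    def apply_reaction(self, reaction):
        """Tip-chemistry subsystem: one reaction from the allowed set."""
        if reaction not in ALLOWED_REACTIONS:
            raise ValueError(f"{reaction} is not a characterized tip reaction")
        self.workpiece_log.append((self.tip_position, reaction))

# Building proceeds as a programmed sequence of (site, reaction) steps:
assembler = MolecularAssembler()
for site, reaction in [((0.0, 0.0, 0.0), "abstract_H"),
                       ((0.0, 0.0, 0.0), "deposit_C2"),
                       ((0.154, 0.0, 0.0), "abstract_H")]:
    assembler.move_tip(site)
    assembler.apply_reaction(reaction)
```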
It is worth noting that current methods in computational
chemistry are sufficient to model the kinds of structures
that will appear in a broad class of molecular machines,
including all of the structures and reactions needed for some
assemblers[16, 20, 21, 22].
5. The Broadcast Architecture
In the von Neumann architecture, in Drexler's assembler, and in
living systems, the complete set of plans for the system is
carried internally in some sort of memory. This is not a
logical necessity in a general manufacturing system. If we
separate the "constructor" from the "computer," and allow
many individual constructors to receive broadcast
instructions from a single central computer then each
constructor need not remember the plans for what it is going
to construct: it can simply be told what to do as it does it
(see figure 3).
This approach not
only eliminates the requirement for a central repository of
plans within the constructor (which is now the component that
self replicates), it can also eliminate almost all of the
mechanisms involved in decoding and interpreting those
plans. The advantages of the broadcast architecture are: (1)
it reduces the size and complexity of the self replicating
component, (2) it allows the self replicating component to
be rapidly redirected to build something novel, and (3) it is
inherently safe when the central computer is macroscopic and
under our direct control, since the individual constructors
lack sufficient capability to function autonomously[6, 18].
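A sketch of the resulting division of labor (illustrative names only):
the central computer holds the sole copy of the plans and does all the
decoding, while each constructor simply executes whatever instruction
it has just received and stores nothing:

```python
# Sketch of the broadcast architecture: one central computer holds the
# plans; many simple constructors execute each broadcast instruction as
# it arrives and store no plans of their own.  Names are illustrative.

class SimpleConstructor:
    """Knows only a few primitive operations; keeps no copy of the plans."""
    def __init__(self, ident):
        self.ident = ident
        self.actions_done = 0

    def execute(self, instruction):
        self.actions_done += 1        # stand-in for a physical operation

class CentralComputer:
    def __init__(self, plan):
        self.plan = plan              # the only copy of the plans

    def broadcast(self, constructors):
        for instruction in self.plan:             # decode centrally ...
            for c in constructors:                # ... broadcast to all
                c.execute(instruction)

constructors = [SimpleConstructor(i) for i in range(1000)]
CentralComputer(plan=["step_1", "step_2", "step_3"]).broadcast(constructors)
# Redirecting production means changing only the central plan;
# the constructors themselves are unchanged.
```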
This general approach is similar to that taken in the
Connection Machine[14], in which a single complex central
processor broadcasts instructions to a large number of very
simple processors. Storing the program, decoding
instructions, and other common activities are the
responsibility of the single central processor; while the
large number of small processors need only interpret a small
set of very simple instructions.
It is interesting to view the cell as using the broadcast
architecture with the nucleus as the "central computer"
broadcasting instructions in the form of mRNA to perhaps
millions[29] of ribosomes.
Drexler has proposed immersing the constructor in a liquid
or gas capable of transmitting pressure changes and using
pressure sensitive ratchets to control the motions of the
constructor[6]. If each pressure sensitive ratchet has a
distinct pressure threshold (so that pressure transitions
around the threshold cause the ratchet to cycle through a
sequence of steps while pressure changes that remain above
or below the threshold cause the ratchet to remain
inoperative) then it is possible to address individual
ratchets simply by adjusting the pressure of the surrounding
fluid. This greatly reduces the complexity of the
instruction decoding hardware.
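A rough model of this addressing scheme, simplified from the proposal
in [6] and using invented numbers: each ratchet advances one step
whenever the broadcast pressure crosses its own threshold, so
oscillating the pressure in a narrow band around one threshold steps
that ratchet alone:

```python
# Rough model of pressure-driven addressing: a ratchet advances one
# step each time the broadcast pressure crosses its own threshold, and
# ignores pressure changes that stay entirely above or below it.
# A simplified sketch of the proposal in [6], with invented numbers.

class PressureRatchet:
    def __init__(self, threshold):
        self.threshold = threshold
        self.steps = 0

    def sense(self, old_pressure, new_pressure):
        lo, hi = sorted((old_pressure, new_pressure))
        if lo < self.threshold < hi:      # transition crossed the threshold
            self.steps += 1

ratchets = [PressureRatchet(t) for t in (1.0, 2.0, 3.0)]  # distinct thresholds

def broadcast(pressure_sequence):
    for old, new in zip(pressure_sequence, pressure_sequence[1:]):
        for r in ratchets:
            r.sense(old, new)

# Oscillating in a narrow band around 2.0 steps only the second ratchet:
broadcast([1.9, 2.1, 1.9, 2.1, 1.9])
print([r.steps for r in ratchets])        # -> [0, 4, 0]
```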
6. Differences between biological systems and general
manufacturing systems
General manufacturing systems are likely to be very different
from biological systems. First, general manufacturing
systems aim to produce products with the best achievable
performance and capabilities, i.e., products that approach the
fundamental limits imposed by physics and chemistry.
Biological systems, based largely on protein, are unlikely
to achieve this objective. Second, it seems likely that the
indirect and circuitous routes by which biological systems
control three dimensional structure (e.g., the protein
folding problem, self assembly to control the position of
molecular components, etc.) will be largely replaced by
simpler and more direct methods that use positional control.
Third, the error rates in biological systems are relatively
high. It should be feasible to substantially reduce these
error rates and produce systems and products with superior
reliability, performance, materials properties, etc.
Fourth, biological systems are not designed to allow rapid
reprogramming. A potato cannot readily be reprogrammed to
make a steak. General manufacturing systems should be able
to respond rapidly to changing requirements by changing what
is manufactured. Fifth and last (at least in this paper),
we want general manufacturing systems to be free of
extraordinary risks.
7. More than proteins
The greater the diversity of products a manufacturing system
can make, the more valuable it is. If it can only make
biological products, its value is reduced. Consider the
problem of building high performance computers. While
biological computers (e.g., the human brain and nervous
system) have many fine properties (and utilize an
architecture and software that are greatly superior
in many respects to anything currently available), they are
based on fundamental components (synapses, neurons) which
have truly atrocious performance. Logic elements with
millisecond delays and meter-per-second signal propagation
velocities are grossly unacceptable in today's computers,
much less in future systems. (Note that the poor performance
of the underlying hardware increases our respect for an
architecture and software which manage to wring such amazing
feats from such slow and unreliable components).
It seems certain that future computers will have the smallest
possible logic elements, built with the highest possible
precision and at the lowest possible cost. This should
result in logic elements which are molecular in both size and
precision, assembled in complex and idiosyncratic patterns.
More plausible candidates than proteins for future
computational hardware are semiconductor devices conceptually
similar to today's but made with vastly greater precision
(individual dopant atoms placed deliberately at specific
lattice sites, for example) and extending fully into three
dimensions. Diamond, with its wide band gap, excellent
thermal conductivity, large breakdown field and high
mobility, would provide an excellent semiconductor for such
future devices[12]. Molecular-sized logic elements packed
densely in three dimensions will produce significant heat, a
problem often overlooked in molecular logic proposals.
This problem can be dealt with by using thermodynamically
reversible logic[19 and references therein].
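The idea underlying such logic can be illustrated with the Fredkin
(controlled-swap) gate, a standard example of a logically reversible,
universal gate; this is only an illustration of logical reversibility,
not the particular switch-based circuits proposed in [19]:

```python
# The Fredkin (controlled-swap) gate: a universal, logically reversible
# gate.  Its outputs determine its inputs uniquely, so no information
# is erased.  This illustrates the idea behind reversible logic, not
# the particular electronic implementation proposed in [19].

def fredkin(c, a, b):
    """If the control bit c is 1, swap a and b; otherwise pass them through."""
    return (c, b, a) if c else (c, a, b)

# The gate is its own inverse: applying it twice restores the inputs.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# AND can be computed reversibly: with b = 0, the third output is a AND c.
c, a, b = 1, 1, 0
print(fredkin(c, a, b)[2])    # -> 1, i.e. a AND c
```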
Biological structural materials are also far from ideal.
Diamond has a strength to weight ratio over 50 times that of
steel, and properly engineered materials in the future should
be able to approach this strength and yet resist fracturing.
Nothing in biology approaches this.
The chemical reactions involved in the synthesis of diamond
today are very different from those involved in making
proteins[1, 2, 11]. Reactions proposed for the atomically
precise synthesis of diamondoid structures involve highly
reactive compounds in an inert environment[6, 21, 22], a very
different approach from that taken in biological systems.
For strength and stiffness, materials using boron, carbon and
nitrogen are superior[3]. Diamond is also an excellent
candidate material for future electronic devices.
If we limit general manufacturing systems to proteins we will
exclude a vast range of very valuable products. We will
almost certainly wish to make diamond and diamondoid
products. This implies the use of reactions and conditions
very different from what we see in biology today.
8. Positional Control
Besides using non-biological materials, general
manufacturing systems are likely to make extensive use of
positional control, i.e., the ability to position molecular
components appropriately by using molecular positional
devices. The Stewart platform[9, 13, 25, 26] seems ideal for
providing positional control at the molecular level. The
basic Stewart platform is an octahedral structure in which
one triangular face is designated the "base," the opposing
face is designated the "platform," and six adjustable-length
struts (which lie along the six edges of the octahedron which
are between the base and platform) control the position of
the platform. Within an allowed range of motion it provides
complete control over the position and orientation of the
platform with respect to the base; it provides high stiffness
(critical to positional control at the molecular scale); all
struts are either in pure tension or pure compression; and
it is a simple design. This simplicity suggests that it
might be feasible to self-assemble a Stewart platform (e.g.,
self assemble an octahedron in which the lengths of the
struts can be controlled: either statically at the time of
self assembly or dynamically in response to an external
signal).
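The appeal of the design is evident from its inverse kinematics: given
a desired position and orientation of the platform, each required
strut length is simply the distance between a base attachment point
and the corresponding transformed platform attachment point, as in the
sketch below (the triangle sizes and pose are arbitrary illustrative
numbers, not a proposed molecular design):

```python
# Inverse kinematics of an octahedral Stewart platform: given a desired
# platform pose, each strut length is the distance between a base
# attachment point and the transformed platform attachment point.
# The geometry and pose below are arbitrary illustrative numbers.

import numpy as np

def triangle(radius, phase):
    """Vertices of an equilateral triangle of given circumradius, in the z=0 plane."""
    angles = phase + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    return np.stack([radius * np.cos(angles),
                     radius * np.sin(angles),
                     np.zeros(3)], axis=1)

base = triangle(radius=1.0, phase=0.0)             # fixed "base" face
platform = triangle(radius=1.0, phase=np.pi / 3)   # opposing face, rotated 60 degrees

# Octahedral edge pattern: each base vertex is joined to the two
# nearest platform vertices, giving the six struts.
STRUTS = [(0, 0), (0, 2), (1, 0), (1, 1), (2, 1), (2, 2)]

def strut_lengths(translation, yaw):
    """Strut lengths needed to hold the platform at the given pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about z
    placed = platform @ R.T + np.asarray(translation)
    return [np.linalg.norm(placed[j] - base[i]) for i, j in STRUTS]

print(strut_lengths(translation=(0.0, 0.0, 1.0), yaw=0.1))
```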
The use of positional control in general manufacturing
systems is consistent with the tradition of kinematic
devices seen in theoretical proposals[10, 27], with
experience from today's macroscopic manufacturing[23], and
with theoretical proposals for molecular manufacturing[6,
21].
While biological systems make extensive use of self assembly
at the molecular level, positional control is dominant in
today's factories (although vibratory bowl feeders[23] are
in essence the macroscopic application of principles more
commonly associated with self assembly in the face of thermal
noise at the molecular level). The application of positional
control at the molecular level appears feasible both
theoretically and experimentally, and offers striking
advantages in the manufacturing process. The reader is
invited to consider the difficulties involved in
manufacturing a car if positional control were prohibited in
the manufacturing process. We can reasonably expect that the
application of positional control to molecular synthesis
will greatly extend the range of things that can be made[21].
It will also result in artificial systems that are very
different from the biological systems with which we are
familiar.
9. Reduced error rates
Another likely difference is in the error rates tolerated
during assembly. The achievable error rate limits the range
of options that can be pursued and in particular limits the
feasible module size. (A "module" is here viewed as an
assemblage which has a relatively high probability of being
manufactured correctly and of functioning correctly, and
hence can be discarded in its entirety if there is any
failure anywhere within the module). When error rates are
high, the module size must be small. If the module size were
large in the face of high error rates, the yield of correctly
working modules would be unacceptably low and overall system
function would be compromised. When error rates are low, the
module size can be large. Protein synthesis has an error
rate of roughly 1 in 10,000 [29] and we do not find proteins
with tens of thousands of amino acids. "Typical" proteins
have hundreds or perhaps a few thousand amino acids.
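The connection between error rate and feasible module size can be made
concrete with a one-line estimate: if each assembly step fails
independently with probability p, a module requiring n steps is built
correctly with probability roughly (1 - p)^n. Taking p to be the
protein synthesis error rate quoted above:

```python
# Yield of a module built from n error-prone steps, each failing
# independently with probability p: roughly (1 - p)**n.
# p = 1e-4 is the protein-synthesis error rate cited in the text;
# the independence assumption is a simplification.

p = 1e-4
for n in (300, 3000, 30000):
    print(f"{n:>6} residues: yield ~ {(1 - p) ** n:.0%}")
# ->    300 residues: yield ~ 97%
# ->   3000 residues: yield ~ 74%
# ->  30000 residues: yield ~ 5%
```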
There are well known methods of assembling unreliable logic
elements into reliable computational systems. However,
these methods result in reduced system performance and
increased bulk. Experience with semiconductor devices
supports the idea that the primary objective in the
manufacturing process is to reduce the error rate to the
lowest possible level, and only when further reductions are
infeasible should redundant logic elements (or other error-
tolerant design approaches) be adopted.
Applying this philosophy to general manufacturing systems,
we should first determine the lowest achievable error rate
and then design modules of the largest possible size using
the simplest and most efficient designs. It seems difficult
to reduce error rates at the molecular level substantially
below the levels caused by radiation[6]. Other error
mechanisms (e.g., thermal, photochemical) can be reduced to
levels that are below the error rate caused by radiation
damage[6] by using appropriate designs. This conclusion
leads to feasible molecular module sizes of tens of billions
of atoms with mean times between failures (MTBFs) of many
decades (where an "error" is
defined to occur if even a single atom is out of place). This
is in sharp contrast to the error rates and module sizes
adopted in biological systems. We can reasonably expect that
systems that take advantage of these low error rates will
involve designs and system functions that are very different
from biological systems.
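As an illustrative consistency check on these figures (simple
arithmetic, not a derivation from [6]): a module of a few tens of
billions of atoms with an MTBF of a few decades implies a per-atom
damage rate on the order of 10^-20 per second, which gives a feel for
how low the residual, radiation-limited error rate must be:

```python
# Rough consistency check on the numbers in the text: what per-atom
# damage rate is implied by a module of ~3e10 atoms with an MTBF of a
# few decades?  (Illustrative arithmetic only, not a figure from [6].)

SECONDS_PER_YEAR = 3.15e7

atoms_per_module = 3e10                 # "tens of billions of atoms"
mtbf_seconds = 30 * SECONDS_PER_YEAR    # "many decades", taken as ~30 years

per_atom_rate = 1.0 / (atoms_per_module * mtbf_seconds)
print(f"implied damage rate ~ {per_atom_rate:.1e} per atom per second")
# -> implied damage rate ~ 3.5e-20 per atom per second
```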
10. Ease of reprogramming
General manufacturing systems should be so designed that they
can readily change what they are manufacturing. While
spraying mRNA over plants to cause the rapid manufacture of
the desired protein has been proposed, biological systems by
and large lack the ability to accept external instructions
about what is to be built. In general manufacturing systems,
by contrast, we will wish to be able to redirect the
manufacturing process rapidly in response to changing
demand.
11. Risks of self replicating systems
Self replicating systems, like other systems, might fail to
work correctly and as a consequence cause damage. Unlike
ordinary systems, they can theoretically inflict an
unlimited amount of damage; they could, for example,
replicate unchecked and destroy the planet[5]. To
be acceptable, any proposed general manufacturing system
must be inherently safe; i.e., not only must the system as
designed not pose any extraordinary risks, this property must
be retained even in the face of accidental design errors,
errors in handling or transmitting the instructions, etc. It
must be robustly safe. There are reasons for believing that
general manufacturing systems will lack the marvelous
flexibility and adaptability that are characteristic of
living organisms and will suffer from the same rigid and
inflexible responses to even small changes in the environment
that are so common in other machinery[17]. This
inflexibility is economically beneficial for it simplifies
design and increases efficiency and economy (flexible
systems able to adapt to a wide range of environments are
imperfectly adapted and less efficient in any specific
environment than less flexible systems narrowly tuned to that
particular environment). Inflexibility is also desirable as
a safety feature, for inflexible systems will fail in an
uncontrolled environment.
12. Conclusion
General purpose programmable self replicating systems
designed for manufacturing are likely to differ dramatically
from biological self replicating systems. Both ducks and
747's fly, but they are very different. Some of the likely
differences are the following: (1) General manufacturing systems will employ
non-biological reactions to make products of greater
strength and superior electronic performance (diamondoid
structures being a primary candidate in both cases). (2)
They will take full advantage of the principles of positional
control as exemplified by devices such as robotic arms,
Stewart platforms and the like. (3) They will use basic
operations that have a much higher reliability than those
used in biological systems, and so will be able to assemble
larger modules (more atoms) with good confidence that they
are error free. This will allow the exploitation of module
designs that are more efficient and compact than anything
that could be contemplated with the relatively high error
rates seen in biological systems. (4) They will be readily
reprogrammable. (5) When designed for manufacturing (and
not deliberately designed to be dangerous, as in weapons)
they will be unable to replicate outside of a very specific
and unnatural environment, making them inherently safe.
13. References
- 1. Angus, J.C., Argoitia, A., Gat, R., Li, Z., Sunkara,
M., Wang, L. and Wang Y. (1993) Chemical vapour
deposition of diamond, Phil. Trans. R. Soc. Lond.
A, 342, pp. 195-208.
- 2. Butler, J.E., and Woodin, R. (1993) Thin film diamond
growth mechanisms, Phil. Trans. R. Soc. Lond. A,
342 pp. 209-224.
- 3. Cohen, M.L. (1993) Predicting Useful Materials,
Science 261, pp. 307-308.
- 4. Drexler, K.E. (1981) Molecular Engineering: an
approach to the development of general capabilities for
molecular manipulation, Proc. National Academy of
Sciences USA, 78, pp. 5275-8.
- 5. Drexler, K. E. (1986) Engines of Creation,
Doubleday.
- 6. Drexler, K.E., (1992) Nanosystems: molecular
machinery, manufacturing, and computation,
Wiley & Sons.
- 7. Eigler, D.M., and Schweizer, E.K. (1990) Positioning
single atoms with a scanning tunnelling microscope,
Nature 344, 524-526.
- 8. Feynman, R.P. (1960) There's Plenty of Room at the
Bottom, Caltech's Engineering and Science, February.
- 9. Fitzgerald, J.M. and Lewis, F.L. (1993) Evaluating the
Stewart Platform for Manufacturing, Robotics
Today, 6, pp. 1-3.
- 10. Freitas, R.A., and Gilbreath, W.P., (1980)
Advanced Automation for Space Missions, National
Technical Information Service N83-15348.
- 11. Frenklach, M., and Wang, H. (1991-I) Detailed surface
and gas-phase chemical kinetics of diamond deposition,
Physical Review B, 43, pp. 1520-1545.
- 12. Geis, M.W., and Angus, J.C., (1992) Diamond Film
Semiconductors, Scientific American, October, p. 84.
- 13. Gough, V.E., and Whitehall, S.G. (1962) Universal
Tyre Test Machine, Proc. 9th Int. Tech Congress
F.I.S.I.T.A., pp. 117.
- 14. Hillis, D. (1986) The Connection Machine, MIT
Press.
- 15. Klafter, R.D., Chmielewski, T.A., and Negin M. (1989)
Robotic Engineering: an Integrated Approach,
Prentice Hall.
- 16. Merkle, R.C. (1991) Computational Nanotechnology,
Nanotechnology, 2, pp. 134-141.
- 17. Merkle, R.C. (1992) Risk Assessment, in Crandall,
B.C. and Lewis, J. (eds) Nanotechnology: Research and
Perspectives, MIT Press, pp. 287-294.
- 18. Merkle, R.C. (1992) Self replicating systems and
molecular manufacturing, Journal of the British
Interplanetary Society, 45, pp. 407-413.
- 19. Merkle, R.C. (1993) Reversible Electronic Logic Using
Switches, Nanotechnology, 4, pp. 21-40.
- 20. Merkle, R.C. (1993) A Proof About Molecular Bearings,
Nanotechnology, 4 pp. 86-90.
- 21. Merkle, R.C. (1993) Molecular Manufacturing: Adding
Positional Control to Chemical Synthesis, Chemical
Design Automation News, 8, No. 9&10, pp. 1.
- 22. Musgrave, C.B., Perry, J.K., Merkle, R.C., and
Goddard, W.A. (1992) Theoretical Studies of a Hydrogen
Abstraction Tool for Nanotechnology,
Nanotechnology, 2, pp. 187-195.
- 23. Riley, F.J. (1983) Assembly Automation,
Industrial Press.
- 24. Signorini, J. (1989) How a SIMD machine can implement
a complex cellular automaton? [sic] A case study: von
Neumann's 29-state cellular automaton, in
Proceedings Supercomputing `89, ACM Press.
- 25. Stewart, D. (1965-66) A Platform with Six Degrees of
Freedom, The Institution of Mechanical Engineers,
Proceedings 1965-66, 180 Part 1, No. 15, pp. 371-
386.
- 26. Stoughton, R.S., and Arai, T. (1993) A Modified
Stewart Platform Manipulator with Improved Dexterity,
IEEE Transactions on Robotics and Automation, 9,
pp. 166-173.
- 27. von Neumann, J. and Burks, A.W. (1966) Theory of
Self-Reproducing Automata, University of Illinois
Press.
- 28. von Tiesenhausen, G., and Darbro, W.A. (1980) Self-
Replicating Systems - A Systems Engineering Approach,
NASA technical memorandum TM-78304, Marshall Space
Flight Center, Alabama.
- 29. Watson, J.D., Hopkins, N.H., Roberts, J.W., Steitz,
J. A., Weiner, A.M. (1987) Molecular Biology of the
Gene, Fourth Edition, Benjamin/Cummings.