It is available on the WWW at http://www.zyvex.com/nanotech/convergent.html. The web version differs in some respects from the published version.
Convergent assembly covers a broad class of architectures. Drexler described one member of this class (see figure 14.4 of Nanosystems (Drexler, 1992)). The present proposal is simpler while retaining the desirable performance characteristics described in Nanosystems.
We could assemble our eight subassemblies into a finished assembly by using one or more robotic arms (or other positional devices) in an assembly module (depicted abstractly in figure 1). The present paper is focused on architectural issues; the reader interested in details of how positional devices can be used in manufacturing is referred to other sources -- one example is A new family of six degree of freedom positional devices (Merkle, submitted to Nanotechnology).
This process can, of course, be continued. Figure 3 shows three stages of this process: 512 sub-sub-subassemblies are assembled in sixteen 0.25 meter assembly modules, making 64 sub-subassemblies; these 64 sub-subassemblies are then assembled into 8 subassemblies in four 0.5 meter assembly modules; finally, the 8 subassemblies are assembled into the final product in the single 1.0 meter assembly module.
Again, the 0.25 meter assembly modules operate twice as fast as the 0.5 meter assembly modules, and four times as fast as the 1.0 meter assembly module. The sixteen 0.25 meter assembly modules can make 64 sub-subassemblies in the same time that the four 0.5 meter assembly modules can make 8 subassemblies, which is also the time the single 1.0 meter assembly module takes to produce the finished product.
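To make the scaling concrete, the following minimal Python sketch (an illustration, not part of the original proposal) tabulates module size, module count, and relative speed for each stage of a binary convergent assembly system; the function name and the choice of a 1.0 meter final stage are assumptions for illustration.

```python
# Sketch of the binary convergent-assembly geometry described above.
# Each stage is half the linear size of the next, contains 4x as many
# modules, and runs twice as fast, so every stage delivers parts
# exactly as fast as the next stage consumes them.

def stage_summary(num_stages, final_size_m=1.0):
    for k in range(num_stages):          # k = 0 is the final stage
        size = final_size_m / 2**k       # module linear dimension
        count = 4**k                     # modules at this stage
        speed = 2**k                     # speed relative to the final stage
        # Each module turns 8 inputs into 1 output, so this whole layer
        # emits count * speed = 8^k outputs per final-stage cycle --
        # exactly the 8^k inputs the layers above it require.
        outputs = count * speed
        print(f"stage -{k}: size {size:.3g} m, {count} modules, "
              f"{speed}x speed, {outputs} outputs per final-stage cycle")

stage_summary(3)
# stage -0: size 1 m,    1 module,   1x speed,  1 output  per cycle
# stage -1: size 0.5 m,  4 modules,  2x speed,  8 outputs per cycle
# stage -2: size 0.25 m, 16 modules, 4x speed, 64 outputs per cycle
```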
We can keep adding stages to this process until the inputs to the first stage are as small as we might find convenient. For the purposes of molecular manufacturing, it is convenient to assume that these initial inputs are ~one nanometer in size.
The claim that it requires 2^(N+1) x tau seconds for the Nth stage to produce a single product of size 2^N x lambda meters, made in item 3 above, can be shown by induction. We postulate that stage 1 requires 4 x tau seconds, providing the base for the induction. Assume the hypothesis is true for stage N-1; we wish to show that it is true for stage N. The assembly modules at stage N-1 will produce their first output in 2^N x tau seconds, at which point (by item 2) the assembly process can begin in stage N. The assembly process in stage N adds an additional 2^N x tau seconds (by item 2) before it produces its first output, so the total time before stage N produces an output is (2^N + 2^N) x tau seconds, or 2^(N+1) x tau seconds, as desired.
If we assume that stage N is unable to overlap any operations at all with stage N-1, i.e., we assume that stage N-1 must produce all eight subassemblies before stage N can even begin, then we will increase the time needed before the Nth stage can produce its first output to 3 x 2^N x tau seconds. This is only 50% longer.
A 30 stage system (N=30) should be able to produce a single meter-sized product in ~200 seconds. It should be able to produce a steady stream of meter-sized products at a rate of roughly one new product every 100 seconds.
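The timing argument can be checked numerically. The short sketch below (an illustration, not from the original) implements the recurrence from the induction and evaluates it for N = 30; the value tau = 1e-7 seconds is an assumption chosen here so that the results line up with the ~200 second and ~100 second figures quoted above.

```python
# Numerical check of the pipelined timing argument.
TAU = 1e-7  # seconds; assumed characteristic time, not from the original

def first_output_time(n):
    """Time (in units of tau) before stage n produces its first output:
    stage 1 takes 4*tau, and stage n starts as soon as stage n-1 emits
    its first output, then adds its own 2^n * tau assembly time."""
    if n == 1:
        return 4
    return first_output_time(n - 1) + 2**n

N = 30
assert first_output_time(N) == 2**(N + 1)   # matches the closed form
print(f"first product after {first_output_time(N) * TAU:.0f} s")  # ~215 s
print(f"steady state: one product per {2**N * TAU:.0f} s")        # ~107 s
print(f"with no overlap at all: {3 * 2**N * TAU:.0f} s")          # 50% longer
```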
To show that assembly module failures need not result in a major disruption of the manufacturing process, we consider a very simple (and inefficient) method of scheduling: we slow the manufacturing process by a factor of two. This implies that each assembly module is producing output 50% of the time and is idle 50% of the time. The total time to manufacture a product is doubled. With this assumption, the output of a failed assembly module could be entirely replaced by running an adjacent assembly module 100% of the time instead of 50% of the time. If two adjacent modules are used to replace the output of a failed assembly module, then they only need to operate 75% of the time.
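As a toy illustration of this scheduling policy, the sketch below (hypothetical; the list-of-duty-cycles representation and function name are assumptions) reassigns a failed module's output share to one or two adjacent modules.

```python
# Failure masking with 50% duty cycles: every module normally runs half
# the time, leaving spare capacity that neighbors can use to cover a
# failed module's share of the output.

def reassign(duty, failed):
    """duty: list of duty cycles (0.5 everywhere initially).
    Cover the failed module's 0.5 output share using its neighbors."""
    duty[failed] = 0.0
    neighbors = [i for i in (failed - 1, failed + 1) if 0 <= i < len(duty)]
    share = 0.5 / len(neighbors)   # split the lost output among neighbors
    for i in neighbors:
        duty[i] += share           # 1.0 with one neighbor, 0.75 with two
    return duty

print(reassign([0.5] * 4, failed=2))   # -> [0.5, 0.75, 0.0, 0.75]
```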
Precise alignment will be particularly important if wires or other fine structures must be precisely joined. A variety of alignment methods are feasible.
Accurate alignment of parts in vacuum should also allow reactive surfaces to be bonded together with great precision. While a wide range of possible surfaces exists, one surface that might be worth further investigation is the diamond (110) surface. This surface does not reconstruct, and so bringing together two diamond (110) surfaces in vacuum might be an effective method of directly bonding two diamond blocks to each other. Modeling of the energy released during this process, and its possible adverse effects on the diamond lattice in the vicinity of the plane joining the two blocks, would be useful. If the energy released during bonding causes significant damage, inclining the two surfaces with respect to each other at a small angle and then joining them slowly should create a more controlled release of energy at the line of joining.
Another issue that arises in the first few stages is the scaling of the control system. At the macroscopic scale, we are used to the idea that a computer is much smaller than the robotic arm it controls. At the very smallest scales, this assumption is dubious. A simple 8-bit processor might occupy a cube about 100 nanometers on a side. If we assume the linear dimensions of each assembly module are only a few times larger than the size of the components to be assembled, then our 8-bit processor will dominate the volume requirements of the assembly module for the first few stages. If the first stage handles one nanometer parts, then our simple scaling laws suggest that the first stage assembly module will have linear dimensions perhaps three times that size (e.g., 3 nanometers). As our 8-bit control processor is over 30 times larger than this (in linear dimensions), the validity of our scaling laws is evidently open to serious question.
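The mismatch can be quantified with a short calculation. The sketch below (illustrative only, using the figures quoted in the text) compares an assumed 100 nanometer cubical processor against modules roughly three times the linear size of the parts they handle, and shows the processor dominating until roughly the sixth or seventh stage.

```python
# Rough check of the control-system scaling problem described above.
PROCESSOR_NM = 100.0             # assumed 8-bit processor, 100 nm cube

for stage in range(1, 9):
    part_nm = 2 ** (stage - 1)   # stage 1 handles ~1 nm parts
    module_nm = 3 * part_nm      # module ~3x the size of its parts
    ratio = PROCESSOR_NM / module_nm
    flag = "processor dominates" if ratio > 1 else "module dominates"
    print(f"stage {stage}: module ~{module_nm:g} nm, "
          f"processor/module = {ratio:.1f} ({flag})")
# stage 1: ratio 33.3 ("over 30 times larger"); the module does not
# overtake the processor until stage 7 (~192 nm module).
```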
One approach to this problem would be to "special case" the first few stages. Employing hard automation and giving up flexibility at these stages would substantially reduce or effectively eliminate their control requirements. This seems like a good idea, as the advantages of hard automation over more flexible methods should be substantial. By combining a relatively modest number of standard 100 nanometer parts we could make a remarkably wide range of products.
If, however, we want a manufacturing process which can more closely approach the ultimate limit of being able to manufacture almost any product consistent with the laws of chemistry and physics, then we could adopt a more flexible approach: we could allow the first few stages to deviate from the simple scaling laws used for the rest of the stages. This is feasible because the first few stages occupy very little volume: if we continued to follow the same scaling laws as used for the later stages, the first few stages would be little more than a very thin "skin" at the beginning of the assembly process. We can deviate from the scaling laws adopted for larger stages by increasing the thickness of this "skin."
We could eliminate the first few stages and replace them with special assembly modules that (a) accept as inputs parts of about 1 nanometer but (b) produce as output parts of about 100 nanometers in size. The volume occupied by the control system would then be roughly comparable to the volume of the rest of the special assembly module. Of course, this means that the first 6 to 7 "normal" stages have been eliminated. If we simply had one layer of special assembly modules, we would find that they were too slow to feed the rest of the system. As suggested above, we can increase the number of special assembly modules and let them occupy a larger volume.
We have been assuming that each assembly module would assemble about 8 parts while operating at a certain characteristic speed. Our special assembly modules will operate at the characteristic speed of stage 6 or 7, but will be handling (2^6)^3 / 8 ~= 32,000 to (2^7)^3 / 8 ~= 262,000 times as many parts. Another way of phrasing this is to point out that a cube 100 nanometers on a side is composed of 1,000,000 one-nanometer cubes, and so contains about 1,000,000 one-nanometer parts. As a consequence, our special assembly module will produce output parts roughly 100,000 times more slowly than a module assembling only 8 sub-parts. This implies that we need roughly 100,000 times more special assembly modules to produce parts at the same rate as the 7th stage modules would have produced them under our original scaling laws. As the 6th or 7th stage modules are a few hundred nanometers in size, we must increase the thickness of the first 6 to 7 stages to a few centimeters (100,000 times a few hundred nanometers).
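The arithmetic behind these estimates can be reproduced directly. The sketch below uses only the figures quoted in the text, with 300 nanometers standing in (as an assumption) for "a few hundred nanometers".

```python
# Numbers behind the "thicker skin" estimate above.
for stage in (6, 7):
    parts = (2**stage) ** 3 / 8        # parts handled vs. the usual 8
    print(f"stage {stage}: {parts:,.0f}x as many parts as a normal module")
# stage 6: 32,768x; stage 7: 262,144x -- the ~32,000 and ~262,000 above

parts_per_output = 100**3              # 1,000,000 one-nm cubes per 100 nm cube
slowdown = parts_per_output / 8        # ~125,000x; the text rounds to ~100,000
print(f"~{slowdown:,.0f}x more special modules needed")

module_nm = 300                        # assumed "few hundred nanometers"
skin_m = slowdown * module_nm * 1e-9   # thickness of the replacement "skin"
print(f"skin thickness ~{skin_m * 100:.1f} cm")   # ~3.8 cm: a few centimeters
```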
It should be emphasized that this increase in volume is based on the assumption that we wish to retain the greatest possible flexibility in control over the first few stages by using one computer per assembly module.
Although each stage in our example system was a factor of two larger than the preceding stage, it is not clear that a factor of two is the most appropriate value for all (or even most) manufacturing applications: larger factors might prove more convenient for some operations. If we wish to bolt two parts together, the bolt might be an order of magnitude smaller than either part. This can be accommodated by passing the bolt through several stages of binary convergent assembly and finally using it in a stage which can accommodate much larger parts. More generally, small parts can, if necessary, be passed through several stages unchanged if this should prove convenient or necessary. The inefficiencies of this approach might be offset by making many small parts and passing them through succeeding stages in a batch (e.g., one might make a box with many bolts in it, rather than making a single bolt and passing it alone through several successively larger and increasingly underutilized stages).
Alternatively, it might be advantageous to increase the size of the succeeding stage by a factor larger than two. Complex patterns in which different stages or even different assembly modules in each stage increase in size by individually selected amounts might prove useful.
The greatest limitation of convergent assembly as presented here is the requirement that the product be relatively stiff. Easily deformed, liquid or gaseous components are most problematic. An inflated balloon (with thanks to Josh Hall for suggesting this example) does not appear well suited to convergent assembly. Many products, however, could be initially manufactured in a more rigid form (e.g., at lower temperature or with supports or scaffolding to provide the desired rigidity) and then later allowed to assume the desired less rigid state (by removing the scaffolding, warming, etc).
Convergent assembly is only one approach to making large objects from small objects. An oak tree does not use convergent assembly, but quite effectively makes something large starting with only a small seed. We have considered an approach in which relatively rigid "parts" have approximately similar X, Y and Z sizes. It would be possible to consider "parts" which are basically one dimensional. This might be useful in the manufacture of products similar to cables, in which long thin strands are woven together into a final product. Two dimensional "parts," e.g., sheets, could also be produced and then pressed together into a final product.
Convergent assembly has many clear advantages but seems ill-suited to the manufacture of flexible, liquid or gaseous products. It is not the only method of making large products from small parts. Further research into both convergent assembly and other approaches appears worthwhile.