The basic unit of information in a traditional computer is a bit, which has a value of 0 or 1. This basic unit allows a computer's processor to perform logical operations, which are in turn carried out by electronic circuits. On top of these basic logical structures, we devise algorithmic models that let us perform sophisticated computations. But what lies beyond deterministic computation? Randomness. Computer games, cryptography, and computational geometry are unthinkable without randomness, and distributed computation is inherently non-deterministic. The future of computing, rooted in randomness, opens up possibilities in the following areas.

## Quantum Computing

In a quantum computer, the basic unit of information is the quantum bit, or qubit. A qubit can take the values 0 and 1, but it can also exist in an intermediate state: a superposition of the two classical states 0 and 1. The biggest difference between quantum and traditional computers is that quantum computers can perform some computations more efficiently. Some problems for which no efficient algorithm is known on traditional computers could be solved efficiently on a quantum computer. Popular public-key cryptosystems such as RSA, whose security rests on the difficulty of factoring large integers, would be rendered obsolete by the advent of practical quantum computers.
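To make superposition concrete, here is a minimal sketch (pure Python, no quantum hardware involved) that models a single qubit as a vector of two amplitudes and applies a Hadamard gate, the standard gate for creating an equal superposition. The function names are my own for illustration:

```python
import math

# A single qubit modeled as a 2-amplitude state vector [amp_0, amp_1].
# The classical state |0> is [1, 0]; a Hadamard gate puts it into an
# equal superposition of 0 and 1.

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    # Measurement probabilities are the squared magnitudes of the amplitudes.
    return [abs(amp) ** 2 for amp in state]

zero = [1.0, 0.0]            # a qubit prepared in the classical state 0
superposed = hadamard(zero)  # now in a superposition of 0 and 1
print(probabilities(superposed))  # each outcome has probability 0.5
```

Measuring the superposed qubit yields 0 or 1 with equal probability, which is exactly the "intermediate state" described above.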

With classical algorithms, the way to search for a given value in an unordered list is to check each item one by one; given n items, this takes O(n) time. On a quantum computer, Grover's algorithm reduces the worst-case complexity to O(√n).
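The O(√n) behavior can be seen in a classical simulation of Grover's algorithm: after roughly (π/4)·√n iterations of "flip the marked amplitude, then reflect about the mean", nearly all probability mass sits on the marked index. This is a statevector simulation sketch, not real quantum hardware:

```python
import math

def grover_search(n, marked):
    """Classically simulate Grover's algorithm over n items."""
    # Start in a uniform superposition over all n indices.
    state = [1 / math.sqrt(n)] * n
    # Only ~(pi/4) * sqrt(n) iterations are needed -- the O(sqrt(n)) speedup.
    iterations = int(math.floor(math.pi / 4 * math.sqrt(n)))
    for _ in range(iterations):
        # Oracle: flip the sign of the marked item's amplitude.
        state[marked] = -state[marked]
        # Diffusion: reflect every amplitude about the mean amplitude.
        mean = sum(state) / n
        state = [2 * mean - amp for amp in state]
    # The marked index now holds almost all of the probability mass.
    probs = [amp ** 2 for amp in state]
    return probs.index(max(probs))

print(grover_search(8, marked=5))  # → 5, found in only 2 iterations
```

A linear scan over 8 items needs up to 8 checks; here two Grover iterations suffice, and the gap widens as n grows.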

However, building a quantum computer is a challenge because qubits are unstable. Even successful prototypes exist with only a few qubits. With adequate research, I believe algorithms will be completely rewritten with the advent of quantum computers.

## DNA Computing

DNA (deoxyribonucleic acid) encodes genetic information as a string of chemicals, called bases, denoted A, C, G, and T. In DNA computing, bits are replaced by these four bases. Strands of DNA represent data, just as strings of bits represent data in a traditional computer. A DNA computation begins by generating strands of DNA that represent possible solutions to a particular problem. After the computation ends, any DNA strands that remain represent solutions to the problem.
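The generate-and-filter workflow can be sketched in ordinary code. Below, every candidate solution to a toy subset-sum problem is "synthesized" as a strand of bases, and the filtering step discards every strand that is not a solution; the bit-to-base encoding scheme here is hypothetical, chosen only to make the idea concrete:

```python
import itertools

def encode(bits):
    # Toy encoding (hypothetical): each bit becomes a pair of bases.
    return "".join("AT" if b else "GC" for b in bits)

def dna_subset_sum(values, target):
    """Generate a 'strand' for every candidate subset, then filter out
    non-solutions -- mimicking DNA computing's generate-and-filter steps."""
    solutions = []
    for bits in itertools.product([0, 1], repeat=len(values)):
        strand = encode(bits)
        total = sum(v for v, b in zip(values, bits) if b)
        if total == target:       # the filtering step keeps only solutions
            solutions.append(strand)
    return solutions

print(dna_subset_sum([3, 5, 8], 8))  # strands encoding {8} and {3, 5}
```

In a real DNA computer the "loop" does not exist: all strands react in the test tube simultaneously, which is where the massive parallelism comes from.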

Currently, the United States Data Encryption Standard (DES) is used for the secure exchange of information in areas such as banking and national security. It relies on one of 72 quadrillion keys to encode a message, and the security results from the difficulty of discovering which key was used. Checking all possible keys on a traditional computer is impractical because it would take too long. However, because DNA computing derives its power from massive parallelism, it could check all 72 quadrillion keys concurrently. Estimates indicate that the computation could take as little as two hours.
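A quick back-of-envelope calculation shows why sequential brute force fails where parallelism succeeds. DES has a 56-bit key, so the keyspace is 2^56 ≈ 7.2 × 10^16 keys, i.e. the "72 quadrillion" above; the checking rate below is an illustrative assumption, not a measured figure:

```python
# Why brute-forcing DES sequentially is hopeless, in round numbers.
keys = 2 ** 56                      # DES keyspace, ~7.2e16 ("72 quadrillion")
rate = 10 ** 9                      # assume 1 billion key checks per second
seconds_per_year = 365 * 24 * 3600

years_sequential = keys / rate / seconds_per_year
print(f"{keys:.2e} keys -> ~{years_sequential:.1f} years at {rate:.0e} keys/s")
```

Even at a billion checks per second, a single sequential machine needs years; a medium that tests every strand, and hence every key, at once collapses that to the hours-scale estimate quoted above.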

There are fears that security would be non-existent with these kinds of 'super-computers'. But technology is always a double-edged sword: such machines may well let us develop a new generation of cryptographic algorithms, turning the situation into yet another cat-and-mouse chase. I'm excited about the future of computing, yet worried about how humans might use it for destructive ends.