# QC 101 WORKSPACE

What is a Quantum Computer?
A Quantum Computer (QC) is a machine that uses the principles of Quantum Mechanics to perform tasks that are practically impossible for a traditional (Classical) computer. We cover this in more detail here.

Whilst Classical computing power has historically doubled roughly every two years as transistor counts grew (Moore’s law), progress appears to be slowing, and certain problems would take Classical computers an impractically long time to solve no matter how much that power grows.

Quantum Mechanics is a fundamental theory in physics describing the properties of nature at the atomic scale. It has certain features that do not occur in standard, or “classical”, physics, such as “Superposition” and “Entanglement”.

Utilising these features of Quantum Mechanics, Quantum Computers are expected to solve certain problems that would simply take too long on Classical computers. Whilst significant development is still required, they are expected to outperform Classical Computers for certain use cases.

Why are Quantum Mechanical behaviours important?
Superposition: If a system can be in state X or state Y, it can also be in a “mixture” of the two states. If we measure it, we see either X or Y, probabilistically.

Collapse: Measurement forces the system into the state we observe, so any further measurements will give the same result.

Entanglement: There exist systems of multiple parts whose overall state cannot be described only in terms of the states of their constituent parts.

Uncertainty: There are pairs of measurements for which greater certainty in the outcome of one implies greater uncertainty in the outcome of the other.

The basic idea behind quantum computing is to use these effects to our advantage.
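Superposition and collapse can be illustrated with a toy simulation. The sketch below is plain Python and is not how real quantum hardware works: it models a single qubit as a pair of amplitudes, with measurement returning 0 or 1 probabilistically and then collapsing the state. All names here are our own, purely illustrative.

```python
import random

# Toy model: a qubit as a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# An equal superposition of the states 0 and 1:
state = (2 ** -0.5, 2 ** -0.5)

def measure(qubit):
    """Measure a qubit: returns (outcome, collapsed_state).

    We see 0 with probability |a|^2, otherwise 1; the state then
    collapses, so any further measurement repeats the same result.
    """
    a, b = qubit
    if random.random() < abs(a) ** 2:
        return 0, (1.0, 0.0)   # collapsed to state 0
    return 1, (0.0, 1.0)       # collapsed to state 1

outcome, collapsed = measure(state)
# Collapse: re-measuring the collapsed state repeats the first outcome.
assert measure(collapsed)[0] == outcome
```

Running `measure(state)` many times gives 0 about half the time and 1 the other half, while re-measuring a collapsed state is completely deterministic, mirroring the behaviours described above.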

[Pretty verbatim from https://people.maths.bris.ac.uk/~csxam/teaching/history.pdf so needs updating]

What is a Qubit?
If you read anything about quantum computers, you are bound to come across the term ‘qubit’. Every big development in quantum computing seems to revolve around adding more qubits, making them more stable, and making them less ‘noisy’. But what does this mean?

Introduction to Qubits, part 1
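One hint at why qubit counts get so much attention: writing down the joint state of n qubits requires 2^n complex amplitudes, so each added qubit doubles the memory a classical simulation of the machine would need. A minimal sketch (the helper name is ours, purely illustrative):

```python
# The joint state of n qubits is described by 2**n complex amplitudes,
# so each extra qubit doubles the cost of simulating the machine
# classically -- part of why every added qubit is newsworthy.
def amplitudes_needed(n_qubits: int) -> int:
    return 2 ** n_qubits

print(amplitudes_needed(10))   # 1024 amplitudes: trivial for a laptop
print(amplitudes_needed(50))   # ~10**15 amplitudes: far beyond ordinary RAM
```

This exponential scaling is also what makes quantum computers interesting: a modest number of qubits describes a state space no classical machine can hold explicitly.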

What types of Quantum Computers are being made?
Qubits are the basic unit of quantum information, which we explain in more detail above. Just like bits, they can be physically realised in a number of ways. Each method listed below has its own advantages and disadvantages, which we cover in the following article.

There are many ways of implementing qubits, and the list below is a selection of the best known rather than an attempt to be exhaustive. We have aimed to distil this into a digestible summary (building on BCG’s excellent table), which we will keep up to date as a live resource.

[Qubit implementations page]

If you are an expert in the field and have suggestions please do not hesitate to get in touch with us at hello@thequantumdaily.com. We will refine this over time, and update for recent developments and breakthroughs.

What do quantum computers look like?
[Link to our article about the IBM design] [Link to our article on Bob Sutor – “steampunk chandelier”] [Explain why they look so large (cooling, nascent technology)]
History of Quantum Computing
[Convert to article, provide summary and link back]

This piece aims to place the development of quantum computers in the wider context of the history of computing.

The computer in the form we recognise today stems from the work of the English mathematics professor Charles Babbage (1791–1871), who designed the Analytical Engine. Whilst he never finished building the device, it has since been shown that it would have worked, and its logical structure is broadly the same as that used by modern computers.

Analytical engine. Source: Science Museum

Quantum Computing is seen by many as the next generation of computing, whose modern history traces back to the 1940s and the advent of the first vacuum-tube computers. There are many ways of dividing up the eras of computing, but we think the following is most instructive.

[Edit the version of the below for the main article to tie with timeline]

The development of computers has a rich history, and what follows is necessarily a very high-level overview, intended to provide a wider contextual lens on the development of quantum computing.

Classical computers use bits (zeros and ones) to represent information. These bits were first represented with physical switches and relay logic in the first electro-mechanical computers.

Vacuum tubes (1940s – 1950s)

Vacuum tubes were used from the 1940s as a way of controlling electric current to represent bits. These were unwieldy, unreliable devices that overheated and required frequent replacement. The first general-purpose digital computer, the Electronic Numerical Integrator and Computer (ENIAC), was completed in 1945 and contained around 18,000 vacuum tubes. Computers of this generation could only perform a single task, and they had no operating system.

Transistors (1950s onwards)

This generation of computers used transistors instead of vacuum tubes, which made them more reliable. In 1951 the first computer for commercial use, the Universal Automatic Computer (UNIVAC I), was introduced to the public. In 1953 the International Business Machines (IBM) 650 and 700 series computers were released and became (relatively) widely used.

The first transistor ever assembled, invented by [xx] at Bell Labs in 1947

Integrated circuits (1960s to 1970s)

The invention of integrated circuits enabled smaller and more powerful computers. An integrated circuit is a set of electronic circuits on one small flat piece of semiconductor material (typically silicon). These were first developed and improved in the late 1950s and through the 1960s.

The original integrated circuit of Jack Kilby

Microprocessors (1970s onwards)

In the 1970s entire central processing units began to be placed on a single integrated circuit, or chip; these devices became known as microprocessors.

Photos, left: Intel; right: Computer History Museum

Whilst processing power has continued to advance at a rapid pace since the 1970s, much of the development of core processors has been iteration on top of this core technology. Today’s computers are a story of further abstraction (including the development of software and middleware) and miniaturisation, with ever smaller microprocessors.

Quantum Computers (present to future)

Quantum Mechanics as a branch of physics began with a set of scientific discoveries around the turn of the 20th century and has been in active development ever since. Most people point to the 1980s as the start of physicists actively exploring computation with quantum systems.

The below is constantly evolving and we welcome edits and additions (please email hello@thequantumdaily.com)

1982: Richard Feynman lectures on the potential advantages of computing with quantum systems.

1985: David Deutsch publishes the idea of a “universal quantum computer”.

1994: Peter Shor presents an algorithm that can efficiently find the factors of large numbers, significantly outperforming the best classical algorithms and theoretically putting the underpinnings of much modern encryption at risk (now referred to as Shor’s algorithm).

1996: Lov Grover presents an algorithm that would allow quantum computers to search unstructured databases more efficiently (now referred to as Grover’s search algorithm).

1996: Seth Lloyd proposes a quantum algorithm that can simulate quantum-mechanical systems.

1999: D-Wave Systems is founded by Geordie Rose.

2000: Edward Farhi and colleagues at MIT develop the idea of adiabatic quantum computing.

2001: IBM and Stanford University publish the first implementation of Shor’s algorithm, factoring 15 into its prime factors on a 7-qubit processor.

2011: D-Wave releases the D-Wave One, the first commercially available quantum computer (a quantum annealer).

2016: IBM makes quantum computing available on the IBM Cloud.

2019: Google claims the achievement of quantum supremacy. The term “quantum supremacy” was coined by John Preskill in 2012 to describe the point at which quantum computers can perform tasks beyond the reach of any classical computer. [LINK]

What kinds of companies are working on Quantum Computing?
There are a number of companies already working on quantum computing.

[Integrate our company mapping content]

Firstly, there are small start-ups:

https://thequantumdaily.com/category/start-ups/

There are also a number of large corporates:

[Screenshot from our company map]
Investment in quantum computing comes in part from large listed corporations (e.g. IBM, Google, Microsoft).

These companies have their own research divisions (e.g. Google AI Quantum) but also invest in other companies.

Venture capital (VC) does invest in quantum computing. That said, compared to other nascent “deep tech” sectors, VC has invested relatively little capital.

Quantum Computing and the world of Venture Capital

How big is the Quantum Computing market?
It may be surprising to some, but market sizing is more of an art than a science. We cover some of the complexity in this article:

Quantum Computing Market Size – Superpositioned For Growth?

Still, many are excited about the potential opportunity…

McKinsey Forecasts Quantum Computing Market Could Reach $1 trillion by 2035

What applications are Quantum Computers being used for?
Not yet: today’s quantum computers are still too small and error-prone to run commercially useful applications. We will expand this section as practical use cases emerge.

How do you invest in Quantum Computing?
We should state up front that writers for The Quantum Daily may hold securities discussed in content published on the website. Our articles are not a recommendation to buy or sell securities, and you should do your own research.

Most quantum computing companies are small, relatively young, and private. The short answer is that it’s not easy to invest purely in Quantum Computing. We provide more detail on this (including our view on ETFs) here:

Investing in Quantum Computing

Nonetheless there are a few companies that are linked to Quantum Computing and publicly traded. Approach with care!

Quantum Computing Incorporated has an interesting story which we cover here.
D-Wave is not directly publicly traded, but you can invest in it through a listed investor in private companies. You can find out more here.
Archer Materials doesn’t just focus on quantum computing but is small enough to be considered. We cover it here.