# Quantum Feature Map

Many classical machine learning methods re-express their input data in a
different space to make it easier to work with, or because the new space may
have some convenient properties. A common example is support vector machines,
which classify data using a linear hyperplane. A linear hyperplane works well
when the data is already linearly separable in the original space; however,
this is unlikely to be true for many data sets. To work around this, it may be
possible to transform the data into a new space where it *is* linearly
separable by way of a *feature map*.

More formally, let \(\cal{X}\) be a set of input data. A feature map \(\phi\) is a function that acts as \(\phi : \cal{X} \rightarrow \cal{F}\) where \(\cal{F}\) is the feature space. The outputs of the map on the individual data points, \(\phi(x)\) for all \(x \in \cal{X}\), are called feature vectors.
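As a concrete (classical) illustration of this definition, the sketch below uses a standard polynomial feature map on 2-D inputs. The function name `phi` and the specific map are illustrative choices, not taken from the text above; the map's useful property is that inner products of feature vectors reproduce the squared inner product of the inputs, so data separable by a circle in the input space becomes separable by a hyperplane in the feature space.

```python
import numpy as np

def phi(x):
    """Illustrative classical feature map: R^2 -> R^3.

    Maps x = (x1, x2) to (x1^2, sqrt(2)*x1*x2, x2^2). With this map,
    phi(x) . phi(y) = (x . y)^2, the degree-2 polynomial kernel.
    """
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# Feature-space inner product equals the squared input-space inner product.
print(phi(x) @ phi(y))   # equals (x @ y) ** 2
print(phi(np.array([1.0, 0.0])))  # [1. 0. 0.]
```

Points at the same radius from the origin all land on the same plane in the feature space, which is why a linear separator there can carve out a circular decision boundary in the original space.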

In general, \(\cal{F}\) is just a vector space. A *quantum feature map*
\(\phi : \cal{X} \rightarrow \cal{F}\) is a feature map for which the vector
space \(\cal{F}\) is a Hilbert space and the feature vectors are quantum
states. The map transforms \(x \rightarrow |\phi(x)\rangle\) by way of a
unitary transformation \(U_{\phi}(x)\), which is typically a variational
circuit whose parameters depend on the input data.
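A minimal sketch of such a circuit is angle encoding, where each input component \(x_i\) sets the rotation angle of an \(RY\) gate on qubit \(i\), so that \(U_{\phi}(x)|0\ldots0\rangle = \bigotimes_i RY(x_i)|0\rangle\). This is one common, simple choice of \(U_{\phi}(x)\), not the only one; the function names below are illustrative.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def quantum_feature_map(x):
    """Angle-encoding feature map: x -> |phi(x)> = kron_i RY(x_i)|0>.

    Each input component becomes the rotation angle of one qubit, so an
    n-dimensional input is mapped to a 2^n-dimensional quantum state.
    """
    state = np.array([1.0])                    # start in |0...0>
    zero = np.array([1.0, 0.0])                # single-qubit |0>
    for xi in x:
        state = np.kron(state, ry(xi) @ zero)  # append RY(x_i)|0>
    return state

x = np.array([np.pi / 2, np.pi])
phi_x = quantum_feature_map(x)
print(phi_x)  # ≈ [0, 0.707, 0, 0.707]
```

Because the feature vectors are quantum states, they are automatically normalized, and overlaps \(\langle\phi(x)|\phi(y)\rangle\) can be estimated on hardware, which is what makes kernel methods on top of quantum feature maps possible.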

For more detailed examples of quantum feature maps, see the entry on quantum embeddings and the key references Schuld & Killoran (2018) and Havlicek et al. (2018).
