matchcake.utils package¶
Submodules¶
matchcake.utils.constants module¶
matchcake.utils.cuda module¶
- matchcake.utils.cuda.is_cuda_available(enable_warnings: bool = False, throw_error: bool = False) bool ¶
matchcake.utils.majorana module¶
- class matchcake.utils.majorana.MajoranaGetter(n: int, maxsize=None)¶
Bases:
object
Class for caching the Majorana matrices. The matrices are computed with get_majorana() and cached in a dictionary for faster reuse.
- Parameters:
n (int) – Number of particles
maxsize (Optional[int]) – Maximum number of items kept in the cache
- __call__(i: int, j: int | None = None) ndarray ¶
Call self as a function.
- __init__(n: int, maxsize=None)¶
- cache_item(key: Any, value: Any) Any ¶
Cache an item. If the cache is full, the oldest item is removed.
- Parameters:
key – The key of the item.
value – The value of the item.
- Returns:
The removed item.
- clear_cache()¶
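A minimal usage sketch (assumptions: matchcake is importable, and __call__(i, j) returns the cached pair product when j is given):

import numpy as np
from matchcake.utils.majorana import MajoranaGetter

getter = MajoranaGetter(n=2, maxsize=128)
c0 = getter(0)       # single Majorana matrix c_0, expected shape (4, 4)
c01 = getter(0, 1)   # cached pair c_0 c_1 (assumption based on __call__(i, j))
print(c0.shape, c01.shape)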
- matchcake.utils.majorana.get_majorana(i: int, n: int) ndarray ¶
Get the Majorana matrix defined as
\[c_{2k+1} = Z^{\otimes k} \otimes X \otimes I^{\otimes n-k-1}\]
for odd \(i\), and
\[c_{2k} = Z^{\otimes k} \otimes Y \otimes I^{\otimes n-k-1}\]
for even \(i\), where \(Z\) is the Pauli Z matrix, \(I\) is the identity matrix, \(X\) and \(Y\) are the Pauli X and Y matrices, \(\otimes\) is the Kronecker product, \(k\) is the index of the Majorana operator and \(n\) is the number of particles.
- Note:
The index \(i\) starts from 0.
- Parameters:
i (int) – Index of the Majorana operator
n (int) – Number of particles
- Returns:
Majorana matrix
- Return type:
np.ndarray
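Following the definition above, a quick sanity check on a single Majorana matrix (a sketch; assumes matchcake is installed):

import numpy as np
from matchcake.utils.majorana import get_majorana

n = 3
c1 = get_majorana(1, n)        # index i = 1, n = 3 particles
print(c1.shape)                # expected (2**n, 2**n) = (8, 8)
# Tensor products of Pauli matrices are Hermitian and square to the identity,
# so the same should hold for the Majorana matrices defined above.
print(np.allclose(c1, c1.conj().T))
print(np.allclose(c1 @ c1, np.eye(2**n)))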
- matchcake.utils.majorana.get_majorana_pair(i: int, j: int, n: int) ndarray ¶
Get the Majorana pair defined as
\[c_{2k+1} c_{2l+1} = Z^{\otimes k+l} \otimes X \otimes I^{\otimes n-k-l-1}\]
for odd \(i\) and \(j\), and
\[c_{2k} c_{2l} = Z^{\otimes k+l} \otimes Y \otimes I^{\otimes n-k-l-1}\]
for even \(i\) and \(j\), where \(Z\) is the Pauli Z matrix, \(I\) is the identity matrix, \(X\) and \(Y\) are the Pauli X and Y matrices, \(\otimes\) is the Kronecker product, \(k\) and \(l\) are the indices of the Majorana operators and \(n\) is the number of particles.
- Note:
The indices \(i\) and \(j\) start from 0.
- Parameters:
i (int) – Index of the first Majorana operator
j (int) – Index of the second Majorana operator
n (int) – Number of particles
- Returns:
Majorana pair
- Return type:
np.ndarray
- matchcake.utils.majorana.get_majorana_pauli_list(i: int, n: int) List[ndarray] ¶
Get the list of Pauli matrices for the computation of the Majorana operator \(c_i\) defined as
\[c_{2k+1} = Z^{\otimes k} \otimes X \otimes I^{\otimes n-k-1}\]
for odd \(i\), and
\[c_{2k} = Z^{\otimes k} \otimes Y \otimes I^{\otimes n-k-1}\]
for even \(i\), where \(Z\) is the Pauli Z matrix, \(I\) is the identity matrix, \(X\) and \(Y\) are the Pauli X and Y matrices, \(\otimes\) is the Kronecker product, \(k\) is the index of the Majorana operator and \(n\) is the number of particles.
- Parameters:
i (int) – Index of the Majorana operator
n (int) – Number of particles
- Returns:
List of Pauli matrices
- Return type:
List[np.ndarray]
- matchcake.utils.majorana.get_majorana_pauli_string(i: int, n: int, join_char='⊗') str ¶
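A small sketch relating these helpers to the definition above (the list length and the exact string format are assumptions inferred from the formula):

from matchcake.utils.majorana import get_majorana_pauli_list, get_majorana_pauli_string

n = 3
paulis = get_majorana_pauli_list(1, n)   # Kronecker factors of c_1
print(len(paulis))                       # expected n = 3
# For i = 1 (odd, k = 0) the definition gives X ⊗ I ⊗ I, so the string
# is expected to resemble 'X⊗I⊗I' (format is an assumption).
print(get_majorana_pauli_string(1, n))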
matchcake.utils.math module¶
- class matchcake.utils.math.TorchLogm(*args, **kwargs)¶
Bases:
Function
- static backward(ctx, G)¶
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
- static forward(ctx, A)¶
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See combining-forward-context for more details
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.
See extending-autograd for more details.
The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
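Since TorchLogm is a torch.autograd.Function, it is presumably invoked through its apply method; a sketch, assuming a square complex matrix is an accepted input:

import torch
from matchcake.utils.math import TorchLogm

A = torch.eye(2, dtype=torch.complex128, requires_grad=True)
logA = TorchLogm.apply(A)   # matrix logarithm with a custom backward pass
print(logA)                 # expected to be close to the zero matrix for A = I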
- matchcake.utils.math.astensor(tensor, like=None, **kwargs)¶
Convert input to a tensor.
- Parameters:
tensor (Any) – Input tensor.
like (str, optional) – The desired tensor framework to use.
kwargs – Additional keyword arguments that are passed to the tensor framework.
- Returns:
The tensor.
- Return type:
Any
- matchcake.utils.math.cast_to_complex(__inputs)¶
Cast the inputs to complex numbers.
- Parameters:
__inputs – Inputs to cast
- Returns:
Inputs cast to complex numbers
- matchcake.utils.math.check_is_unitary(tensor: Any)¶
- matchcake.utils.math.circuit_matmul(first_matrix: Any, second_matrix: Any, direction: MatmulDirectionType = MatmulDirectionType.LR, operator: Literal['einsum', 'matmul', '@'] = '@') Any ¶
Perform a matrix multiplication of two matrices with the given direction.
- Parameters:
first_matrix (Any) – First matrix.
second_matrix (Any) – Second matrix.
direction (Literal["rl", "lr"]) – Direction of the matrix multiplication: “rl” for right to left and “lr” for left to right. That means the result will be first_matrix @ second_matrix if the direction is “rl” and second_matrix @ first_matrix if the direction is “lr”.
operator (Literal["einsum", "matmul", "@"]) – Operator to use for the matrix multiplication. “einsum” for einsum, “matmul” for matmul, “@” for __matmul__.
- Returns:
Result of the matrix multiplication.
- Return type:
Any
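A sketch of the direction semantics described above (assumptions: the string literals "rl"/"lr" are accepted, as suggested by the parameter type, and NumPy inputs are supported):

import numpy as np
from matchcake.utils.math import circuit_matmul

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
# Per the docstring, "rl" yields a @ b and "lr" yields b @ a.
out_rl = circuit_matmul(a, b, direction="rl")
out_lr = circuit_matmul(a, b, direction="lr")
print(np.allclose(out_rl, a @ b), np.allclose(out_lr, b @ a))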
- matchcake.utils.math.convert_1d_to_2d_indexes(indexes: Iterable[int], n_rows: int | None = None) ndarray ¶
- matchcake.utils.math.convert_2d_to_1d_indexes(indexes: Iterable[Tuple[int, int]], n_rows: int | None = None) ndarray ¶
- matchcake.utils.math.convert_and_cast_like(tensor1, tensor2)¶
Convert and cast tensor1 to the same type as tensor2.
- Parameters:
tensor1 (Any) – Tensor to convert and cast.
tensor2 (Any) – Tensor to use as a reference.
- Returns:
tensor1 converted and cast to the type of tensor2.
- matchcake.utils.math.convert_and_cast_tensor_from_tensors(tensor: TensorLike, tensors: List[TensorLike], cast_priorities: List[Literal['numpy', 'autograd', 'jax', 'tf', 'torch']] = ('numpy', 'autograd', 'jax', 'tf', 'torch')) TensorLike ¶
Convert and cast the tensor to the same type as the given tensors, using the given priorities.
- Parameters:
tensor (TensorLike) – Tensor to convert and cast.
tensors (List[TensorLike]) – Tensors to use as a reference.
cast_priorities (List[Literal["numpy", "autograd", "jax", "tf", "torch"]]) – Priorities of the casting. The higher the index, the higher the priority.
- Returns:
Converted and cast tensor.
- Return type:
TensorLike
- matchcake.utils.math.convert_and_cast_tensors_to_same_type(tensors: List[TensorLike], cast_priorities: List[Literal['numpy', 'autograd', 'jax', 'tf', 'torch']] = ('numpy', 'autograd', 'jax', 'tf', 'torch')) List[TensorLike] ¶
Convert and cast the tensors to the same type using the given priorities.
- Parameters:
tensors (List[TensorLike]) – Tensors to convert and cast.
cast_priorities (List[Literal["numpy", "autograd", "jax", "tf", "torch"]]) – Priorities of the casting. The higher the index, the higher the priority.
- Returns:
Converted and cast tensors.
- Return type:
List[TensorLike]
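A sketch mixing NumPy and PyTorch inputs (assumption: with the default priorities, torch outranks numpy, so both outputs should share the torch interface):

import numpy as np
import torch
from matchcake.utils.math import convert_and_cast_tensors_to_same_type

a = np.arange(4.0).reshape(2, 2)
b = torch.ones(2, 2)
a_t, b_t = convert_and_cast_tensors_to_same_type([a, b])
print(type(a_t), type(b_t))   # both expected to share the same (torch) interface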
- matchcake.utils.math.convert_like_and_cast_to(tensor, like, dtype=None)¶
Convert and cast the tensor to the same type as the tensor like.
- Parameters:
tensor (Any) – Tensor to convert and cast.
like (Any) – Tensor to use as a reference.
dtype (Any) – Data type to cast the tensor.
- Returns:
Converted and casted tensor.
- matchcake.utils.math.convert_tensors_to_same_type(tensors: List[TensorLike], cast_priorities: List[Literal['numpy', 'autograd', 'jax', 'tf', 'torch']] = ('numpy', 'autograd', 'jax', 'tf', 'torch')) List[TensorLike] ¶
Convert the tensors to the same type using the given priorities.
- Parameters:
tensors (List[TensorLike]) – Tensors to convert and cast.
cast_priorities (List[Literal["numpy", "autograd", "jax", "tf", "torch"]]) – Priorities of the casting. The higher the index, the higher the priority.
- Returns:
Converted and cast tensors.
- Return type:
List[TensorLike]
- matchcake.utils.math.convert_tensors_to_same_type_and_cast_to(tensors: List[TensorLike], cast_priorities: List[Literal['numpy', 'autograd', 'jax', 'tf', 'torch']] = ('numpy', 'autograd', 'jax', 'tf', 'torch'), dtype=None) List[TensorLike] ¶
Convert the tensors to the same type using the given priorities and cast them to the given dtype.
- Parameters:
tensors (List[TensorLike]) – Tensors to convert and cast.
cast_priorities (List[Literal["numpy", "autograd", "jax", "tf", "torch"]]) – Priorities of the casting. The higher the index, the higher the priority.
dtype (Any) – Data type to cast the tensors to.
- Returns:
Converted and cast tensors.
- Return type:
List[TensorLike]
- matchcake.utils.math.dagger(tensor: Any) Any ¶
Compute the conjugate transpose of the tensor.
- Parameters:
tensor (Any) – Input tensor.
- Returns:
Conjugate transpose of the tensor.
- Return type:
Any
- matchcake.utils.math.det(tensor: Any) Any ¶
Compute the determinant of the tensor.
- Parameters:
tensor (Any) – Input tensor.
- Returns:
Determinant of the tensor.
- Return type:
Any
- matchcake.utils.math.exp_euler(x: TensorLike) TensorLike ¶
Compute the matrix exponential using the Euler formula.
\(e^{ix} = \cos(x) + i \sin(x)\)
- Parameters:
x (TensorLike) – input of the exponential.
- Returns:
The exponential of the input.
- Return type:
TensorLike
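Based on the formula above, a quick check against the standard complex exponential (a sketch; element-wise application to an array input is an assumption):

import numpy as np
from matchcake.utils.math import exp_euler

x = np.linspace(0.0, np.pi, 5)
print(np.allclose(exp_euler(x), np.cos(x) + 1j * np.sin(x)))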
- matchcake.utils.math.exp_taylor_series(x: Any, terms: int = 18) Any ¶
Compute the matrix exponential using the Taylor series.
- Parameters:
x (Any) – input of the exponential.
terms (int) – Number of terms in the Taylor series.
- Returns:
The exponential of the input.
- Return type:
Any
- matchcake.utils.math.eye_block_matrix(matrix: TensorLike, n: int, index: int)¶
Take a matrix and insert it into a larger identity (eye) matrix:
\[\begin{split}\begin{pmatrix} I & 0 & 0 \\ 0 & M & 0 \\ 0 & 0 & I \end{pmatrix}\end{split}\]
where \(I\) is the identity matrix and \(M\) is the input matrix.
- Parameters:
matrix –
n –
index –
- Returns:
- matchcake.utils.math.eye_like(tensor: Any)¶
- matchcake.utils.math.fermionic_operator_matmul(first_matrix: Any, second_matrix: Any, direction: MatmulDirectionType = MatmulDirectionType.RL, operator: Literal['einsum', 'matmul', '@'] = '@')¶
Perform a matrix multiplication of two fermionic operator matrices with the given direction.
- Parameters:
first_matrix (Any) – First fermionic operator matrix.
second_matrix (Any) – Second fermionic operator matrix.
direction (Literal["rl", "lr"]) – Direction of the matrix multiplication: “rl” for right to left and “lr” for left to right. That means the result will be first_matrix @ second_matrix if the direction is “rl” and second_matrix @ first_matrix if the direction is “lr”.
operator (Literal["einsum", "matmul", "@"]) – Operator to use for the matrix multiplication. “einsum” for einsum, “matmul” for matmul, “@” for __matmul__.
- Returns:
Result of the matrix multiplication.
- Return type:
Any
- matchcake.utils.math.get_like_tensors_of_highest_priority(tensors: List[TensorLike], cast_priorities: List[Literal['numpy', 'autograd', 'jax', 'tf', 'torch']] = ('numpy', 'autograd', 'jax', 'tf', 'torch')) TensorLike ¶
Get the like tensor to use as the casting reference, i.e. the tensor whose framework has the highest priority among the given tensors according to the given priorities.
- Parameters:
tensors (List[TensorLike]) – Tensors to inspect.
cast_priorities (List[Literal["numpy", "autograd", "jax", "tf", "torch"]]) – Priorities of the casting. The higher the index, the higher the priority.
- Returns:
The like tensor of highest priority.
- Return type:
TensorLike
- matchcake.utils.math.logm(tensor, like=None)¶
Compute the matrix logarithm of an array, \(\ln{X}\).
- Note:
This function is not differentiable with Autograd, as it relies on the scipy implementation.
- matchcake.utils.math.matmul(left: Any, right: Any, operator: Literal['einsum', 'matmul', '@'] = '@')¶
Perform a matrix multiplication of two matrices.
- Parameters:
left (Any) – Left matrix.
right (Any) – Right matrix.
operator (Literal["einsum", "matmul", "@"]) – Operator to use for the matrix multiplication. “einsum” for einsum, “matmul” for matmul, “@” for __matmul__.
- Returns:
Result of the matrix multiplication.
- Return type:
Any
- matchcake.utils.math.orthonormalize(tensor: Any, check_if_normalize: bool = True, raises_error: bool = False) Any ¶
Orthonormalize the tensor.
- Note:
Computed as \(U V\), where \(U, S, V = \mathrm{SVD}(\text{tensor})\).
- Parameters:
tensor (Any) – Input tensor.
check_if_normalize (bool) – Whether to check if the tensor is already orthonormalized.
- Returns:
Orthonormalized tensor.
- Return type:
Any
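A NumPy-only sketch of the SVD construction described in the note above (this re-implements the idea for illustration; it is not the library call itself):

import numpy as np

def orthonormalize_sketch(tensor: np.ndarray) -> np.ndarray:
    # The product of the two orthogonal SVD factors is an orthonormal matrix.
    u, s, vh = np.linalg.svd(tensor)
    return u @ vh

m = np.random.default_rng(0).normal(size=(3, 3))
q = orthonormalize_sketch(m)
print(np.allclose(q @ q.conj().T, np.eye(3)))   # q is orthonormal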
- matchcake.utils.math.random_choice(a, probs, axis=-1)¶
- matchcake.utils.math.random_index(probs, n: int | None = None, axis=-1, normalize_probs: bool = True, eps: float = 1e-12)¶
- matchcake.utils.math.shape(tensor)¶
Get the shape of a tensor.
- Parameters:
tensor (Any) – The tensor.
- Returns:
The shape of the tensor.
- Return type:
tuple
- matchcake.utils.math.svd(tensor: Any) Tuple[Any, Any, Any] ¶
Compute the singular value decomposition of the tensor.
- Parameters:
tensor (Any) – Input tensor.
- Returns:
Singular value decomposition of the tensor.
- Return type:
Tuple[Any, Any, Any]
- matchcake.utils.math.unique_2d_array(array: TensorLike, sort: bool = False) TensorLike ¶
Get the unique rows of a 2D array.
- Parameters:
array (TensorLike) – 2D array.
sort (bool) – Whether to sort the unique rows.
- Returns:
Unique rows of the array.
- Return type:
TensorLike
matchcake.utils.operators module¶
- matchcake.utils.operators.adjoint_generator(op_iterator: Generator[Operation, None, None], **kwargs) Iterator[Operation] ¶
This function will reverse the order of the operations in the iterator and return the adjoint operations.
- Parameters:
op_iterator (Iterable[qml.operation.Operation]) – The iterator of operations.
kwargs – Additional keyword arguments.
- Returns:
The iterator of adjoint operations.
- matchcake.utils.operators.recursive_2in_operator(operator: Callable[[Any, Any], Any], __inputs: List[Any], recursive: bool = True) Any ¶
Apply an operator recursively to a list of inputs. The operator must accept two inputs. The inputs are applied from left to right.
# TODO: try to go from left to right and from right to left and compare the performance.
- Parameters:
operator (Callable[[Any, Any], Any]) – Operator to apply
__inputs (List[Any]) – Inputs to apply the operator to
recursive (bool) – If True, apply the operator recursively. If False, apply the operator iteratively using functools.reduce.
- Returns:
Result of the operator applied to the inputs
- Return type:
Any
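A sketch of the left-to-right application with a plain two-input operator (assumption: any callable taking two arguments is accepted):

import numpy as np
from matchcake.utils.operators import recursive_2in_operator

matrices = [np.eye(2), np.diag([1.0, 2.0]), np.diag([3.0, 4.0])]
# Apply matrix multiplication pairwise, left to right:
# (matrices[0] @ matrices[1]) @ matrices[2]
result = recursive_2in_operator(np.matmul, matrices)
print(np.allclose(result, matrices[0] @ matrices[1] @ matrices[2]))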
- matchcake.utils.operators.recursive_kron(__inputs: List[Any], lib=numpy, recursive: bool = True) Any ¶
matchcake.utils.torch_pfaffian module¶
- class matchcake.utils.torch_pfaffian.Pfaffian(*args, **kwargs)¶
Bases:
Function
- EPSILON = 1e-10¶
- static backward(ctx: FunctionCtx, grad_output)¶
The gradient is given by
\[\frac{\partial \text{pf}(A)}{\partial x_i} = \frac{1}{2} \text{pf}(A) \, \text{tr}\left(A^{-1} \frac{\partial A}{\partial x_i}\right)\]
- Parameters:
ctx – Context
grad_output – Gradient of the output
- Returns:
Gradient of the input
- static forward(ctx: FunctionCtx, matrix: Tensor)¶
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See combining-forward-context for more details
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.
See extending-autograd for more details.
The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
matchcake.utils.torch_utils module¶
- matchcake.utils.torch_utils.detach(x: Any)¶
- matchcake.utils.torch_utils.to_cpu(x: Any, dtype=torch.float64)¶
- matchcake.utils.torch_utils.to_cuda(x: Any, dtype=torch.float64)¶
- matchcake.utils.torch_utils.to_numpy(x: Any, dtype=numpy.float64)¶
- matchcake.utils.torch_utils.to_tensor(x: Any, dtype=torch.float64)¶
- matchcake.utils.torch_utils.torch_wrap_circular_bounds(tensor, lower_bound: float = 0.0, upper_bound: float = 1.0)¶
Module contents¶
- matchcake.utils.binary_state_to_state(binary_state: ndarray | List[int | bool] | str) ndarray ¶
Convert a binary state to a state vector. The binary state is a binary string of length \(n\), where \(n\) is the number of particles; the state is a vector of length \(2^n\).
- Parameters:
binary_state (Union[np.ndarray, List[Union[int, bool]], str]) – Binary state
- Returns:
State
- Return type:
np.ndarray
- matchcake.utils.binary_string_to_state_number(binary_string: str) int ¶
Convert a binary string to a state number. The binary string is a string of 0s and 1s. The state number is an integer.
- Parameters:
binary_string (str) – Binary string
- Returns:
State number
- Return type:
int
- matchcake.utils.binary_string_to_vector(binary_string: str, encoding: str = 'ascii') ndarray ¶
Convert a binary string to a vector. The binary string is a string of 0s and 1s. The vector is a vector of integers.
- Parameters:
binary_string (str) – Binary string
encoding (str) – Encoding of the binary string. Default is ascii.
- Returns:
Vector
- matchcake.utils.camel_case_to_spaced_camel_case(__string: str) str ¶
Convert a camel case string to a spaced camel case string. The conversion is done by adding a space before every capital letter.
- Parameters:
__string (str) – Camel case string
- Returns:
Spaced camel case string
- Return type:
str
- matchcake.utils.check_if_imag_is_zero(__matrix: ndarray, eps: float = 1e-05) bool ¶
Check if the imaginary part of a matrix is zero.
- Parameters:
__matrix – Matrix to check
eps – Tolerance for the imaginary part
- Returns:
True if the imaginary part is zero, False otherwise
- matchcake.utils.decompose_binary_state_into_majorana_indexes(__binary_state: ndarray | List[int | bool] | str) ndarray ¶
Decompose a state into Majorana operators. The state is decomposed as
\[|x> = c_{2p_{1}} ... c_{2p_{\ell}} |0>\]
where \(|x>\) is the state, \(c_i\) are the Majorana operators, \(p_i\) are the indices of the Majorana operators and \(\ell\) is the Hamming weight of the state.
Note: The state must be a pure state in the computational basis.
- Parameters:
__binary_state (Union[np.ndarray, List[Union[int, bool]], str]) – Input state as a binary string.
- Returns:
Indices of the Majorana operators
- Return type:
np.ndarray
- matchcake.utils.decompose_matrix_into_majoranas(__matrix: ndarray, majorana_getter: MajoranaGetter | None = None) ndarray ¶
Decompose a matrix into Majorana operators. The matrix is decomposed as
\[\mathbf{M} = \sum_{i=0}^{2^{n}-1} m_i c_i\]
where \(\mathbf{M}\) is the matrix, \(m_i\) are the coefficients of the matrix, \(n\) is the number of particles and \(c_i\) are the Majorana operators.
- Parameters:
__matrix (np.ndarray) – Input matrix
majorana_getter (Optional[MajoranaGetter]) – Majorana getter
- Returns:
Coefficients of the Majorana operators
- Return type:
np.ndarray
- matchcake.utils.decompose_state_into_majorana_indexes(__state: int | ndarray | sparray, n: int | None = None) ndarray ¶
Decompose a state into Majorana operators. The state is decomposed as
\[|x> = c_{2p_{1}} ... c_{2p_{\ell}} |0>\]
where \(|x>\) is the state, \(c_i\) are the Majorana operators, \(p_i\) are the indices of the Majorana operators and \(\ell\) is the Hamming weight of the state.
Note: The state must be a pure state in the computational basis.
- Parameters:
__state (Union[int, np.ndarray, sparse.sparray]) – Input state
n (Optional[int]) – Number of particles. Used only if the state is an integer.
- Returns:
Indices of the Majorana operators
- Return type:
np.ndarray
- matchcake.utils.get_4x4_non_interacting_fermionic_hamiltonian_from_params(params)¶
Compute the non-interacting fermionic Hamiltonian from the parameters of the Matchgate model.
- Parameters:
params (MatchgateParams) – Parameters of the Matchgate model
- Returns:
Non-interacting fermionic Hamiltonian
- Return type:
np.ndarray
- matchcake.utils.get_all_subclasses(__class, include_base_cls: bool = False) set ¶
Get all the subclasses of a class.
- Parameters:
__class (Any) – Class
include_base_cls (bool) – Include the base class in the set of subclasses
- Returns:
Subclasses
- Return type:
set
- matchcake.utils.get_block_diagonal_matrix(n: int) ndarray ¶
Construct the special block diagonal matrix of shape (2n x 2n) defined as
\[\begin{split}\mathbf{B} = \oplus_{j=1}^{n} \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}\end{split}\]
where \(\oplus\) is the direct sum operator, \(n\) is the number of particles and \(i\) is the imaginary unit.
- Parameters:
n (int) – Number of particles
- Returns:
Block diagonal matrix of shape (2n x 2n)
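For n = 1 the formula above reduces to a single 2 x 2 block; a sketch comparing against a direct NumPy construction (the expected equality is an assumption based on the formula):

import numpy as np
from matchcake.utils import get_block_diagonal_matrix

block = np.array([[1.0, 1.0j], [-1.0j, 1.0]])
print(np.allclose(get_block_diagonal_matrix(1), block))
# For general n, the result is expected to be the direct sum of n such blocks,
# e.g. scipy.linalg.block_diag(*([block] * n)).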
- matchcake.utils.get_eigvals_on_z_basis(op: Operation, raise_on_failure: bool = False, options_on_failure: dict | None = None) TensorLike ¶
Get the eigenvalues of the operator on the Z basis. First, the eigenvalues are computed by extracting the diagonal elements of the operator matrix. If that computation fails, they are computed by calling the qml.eigvals function; the drawback of the latter is that there is no guarantee the eigenvalues are ordered with respect to the Z basis.
- Parameters:
op (qml.Operation) – Operator
raise_on_failure (bool) – Raise an exception if the computation fails. Default is False.
options_on_failure (Optional[dict]) – Options to pass to the qml.eigvals function if the computation fails.
- Returns:
Eigenvalues of the operator on the Z basis.
- Return type:
qml.TensorLike
- matchcake.utils.get_hamming_weight(state: ndarray) int ¶
Compute the Hamming weight of a state. The Hamming weight is defined as the number of non-zero elements in the state.
The binary state is a one-hot vector of shape \((2^n,)\), where \(n\) is the number of particles, and the Hamming weight counts the entries of the corresponding binary state that are equal to 1.
- Parameters:
state (np.ndarray) – State of the system
- Returns:
Hamming weight of the state
- Return type:
int
- matchcake.utils.get_non_interacting_fermionic_hamiltonian_from_coeffs(hamiltonian_coefficients_matrix, energy_offset=0.0, lib=pennylane.numpy)¶
Compute the non-interacting fermionic Hamiltonian from the coefficients of the Majorana operators.
\[H = -i\sum_{\mu,\nu = 0}^{2n-1} h_{\mu \nu} c_\mu c_\nu + \epsilon \mathbb{I}\]
where \(h_{\mu \nu}\) are the coefficients of the Majorana operators \(c_\mu\) and \(c_\nu\), \(n\) is the number of particles, \(\mu\), \(\nu\) are the indices of the Majorana operators, \(\epsilon\) is the energy offset and \(\mathbb{I}\) is the identity matrix.
- TODO: optimize the method by replacing the sum with a matrix multiplication as \(H = i C^T h C\), where \(C\) is the matrix of Majorana operators.
TODO: use multiprocessing to parallelize the computation of the matrix elements.
- Parameters:
hamiltonian_coefficients_matrix (np.ndarray) – Coefficients of the Majorana operators. Must be a square matrix of shape \((2n, 2n)\).
energy_offset (float) – Energy offset
lib – Library to use for the operations
- Returns:
Non-interacting fermionic Hamiltonian
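A NumPy sketch of the double sum in the formula above, built from get_majorana (the helper name is hypothetical and this is illustrative only, not necessarily the library's implementation):

import numpy as np
from matchcake.utils.majorana import get_majorana

def hamiltonian_sketch(h: np.ndarray, energy_offset: float = 0.0) -> np.ndarray:
    # h is the (2n, 2n) coefficient matrix of the Majorana operators.
    n = h.shape[0] // 2
    c = [get_majorana(mu, n) for mu in range(2 * n)]
    ham = -1j * sum(h[mu, nu] * (c[mu] @ c[nu])
                    for mu in range(2 * n) for nu in range(2 * n))
    return ham + energy_offset * np.eye(2**n)

# Example with a single particle (n = 1), i.e. a (2, 2) coefficient matrix.
print(hamiltonian_sketch(np.array([[0.0, 0.5], [-0.5, 0.0]])).shape)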
- matchcake.utils.get_probabilities_from_state(state: ndarray, wires=None) ndarray ¶
Compute the probabilities from a state. The probabilities are defined as
\[p_i = |x_i|^2\]
where \(x_i\) are the amplitudes of the state.
- Parameters:
state (np.ndarray) – State of the system
wires (list[int]) – Wires to consider
- Returns:
Probabilities
- Return type:
np.ndarray
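For the full register (no wire restriction), the formula above is simply the squared modulus of the amplitudes; a plain NumPy illustration (not the library call itself):

import numpy as np

state = np.array([1.0, 1.0j]) / np.sqrt(2)   # single-qubit state
probs = np.abs(state) ** 2                   # p_i = |x_i|^2
print(probs)                                 # [0.5, 0.5]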
- matchcake.utils.get_unitary_from_hermitian_matrix(matrix: ndarray) ndarray ¶
Get the unitary matrix from a Hermitian matrix. The unitary matrix is defined as
\[U = e^{-iH}\]where \(H\) is the Hermitian matrix and \(i\) is the imaginary unit.
- Parameters:
matrix (np.ndarray) – Hermitian matrix
- Returns:
Unitary matrix
- Return type:
np.ndarray
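A consistency check against SciPy's matrix exponential, following the formula above (a sketch; assumes SciPy and matchcake are installed):

import numpy as np
from scipy.linalg import expm
from matchcake.utils import get_unitary_from_hermitian_matrix

H = np.array([[0.0, 1.0], [1.0, 0.0]])    # Hermitian matrix (Pauli X)
U = get_unitary_from_hermitian_matrix(H)
print(np.allclose(U, expm(-1j * H)))      # U = e^{-iH}
print(np.allclose(U @ U.conj().T, np.eye(2)))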
- matchcake.utils.load_backend_lib(backend)¶
- matchcake.utils.make_single_particle_transition_matrix_from_gate(u: Any, majorana_getter: MajoranaGetter | None = None) Any ¶
Compute the single particle transition matrix. This matrix is the matrix \(R\) such that
\[R_{\mu\nu} = \frac{1}{4} \text{Tr}\left[\left(U c_\mu U^\dagger\right)c_\nu\right]\]
where \(U\) is the matchgate and \(c_\mu\) is the \(\mu\)-th Majorana operator.
- Note:
This operation has polynomial complexity only when the number of particles is less than or equal to 2, and exponential complexity otherwise.
- Parameters:
u – Matchgate matrix of shape (…, 2^n, 2^n)
majorana_getter (Optional[MajoranaGetter]) – Majorana getter of n particles
- Returns:
The single particle transition matrix of shape (…, 2n, 2n)
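A NumPy sketch of the trace formula above for the two-particle case (the helper name is hypothetical; this is not necessarily how the library evaluates it):

import numpy as np
from matchcake.utils.majorana import get_majorana

def transition_matrix_sketch(u: np.ndarray, n: int) -> np.ndarray:
    # R[mu, nu] = 1/4 Tr[(U c_mu U^dagger) c_nu]
    c = [get_majorana(mu, n) for mu in range(2 * n)]
    r = np.zeros((2 * n, 2 * n), dtype=complex)
    for mu in range(2 * n):
        for nu in range(2 * n):
            r[mu, nu] = 0.25 * np.trace(u @ c[mu] @ u.conj().T @ c[nu])
    return r

u = np.eye(4)                              # identity gate on n = 2 particles
print(transition_matrix_sketch(u, 2).round(3))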
- matchcake.utils.make_transition_matrix_from_action_matrix(action_matrix)¶
Compute the transition matrix from the action matrix. The transition matrix is defined as \(\mathbf{T}\) such that
\[\mathbf{T}_{i,\nu} = \frac{1}{2} \left( \mathbf{A}^T_{2i-1,\nu} + i \mathbf{A}^T_{2i,\nu} \right)\]
where \(\mathbf{A}\) is the action matrix of shape (2n x 2n), \(\mathbf{T}\) is the transition matrix of shape (n x 2n), \(i\) goes from 1 to \(n\) and \(\nu\) goes from 1 to \(2n\).
- Parameters:
action_matrix –
- Returns:
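A NumPy sketch of the index formula above, with its 1-based indices translated to 0-based rows (illustrative only; the helper name is hypothetical):

import numpy as np

def transition_from_action_sketch(action_matrix: np.ndarray) -> np.ndarray:
    # A is (2n, 2n); row i of T is (A^T row 2i plus 1j times A^T row 2i+1) / 2.
    a_t = action_matrix.T
    return 0.5 * (a_t[0::2] + 1j * a_t[1::2])

A = np.eye(4)
print(transition_from_action_sketch(A).shape)   # (2, 4), i.e. (n, 2n)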
- matchcake.utils.make_wires_continuous(wires: Wires | ndarray)¶
- matchcake.utils.skew_antisymmetric_vector_to_matrix(__vector) ndarray ¶
Compute the skew-antisymmetric matrix from a vector. The skew-antisymmetric (NxN) matrix is defined as
\[\begin{split}\mathbf{A} = \begin{pmatrix} 0 & a_0 & a_1 & a_2 & \dots & a_{N-1} \\ -a_0 & 0 & a_{N} & a_{N+1} & \dots & a_{2N-2} \\ -a_1 & -a_{N} & 0 & a_{2N} & \dots & a_{3N-3} \\ -a_2 & -a_{N+1} & -a_{2N} & 0 & \dots & a_{4N-4} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_{N-1} & -a_{2N-2} & -a_{3N-3} & -a_{4N-4} & \dots & 0 \end{pmatrix}\end{split}\]
where \(a_i\) are the elements of the vector \(\mathbf{a}\) of length \(N(N-1)/2\).
- Note:
The length of the vector must be \(N(N-1)/2\) where \((N, N)\) is the shape of the matrix.
- Parameters:
__vector – Vector of length \(N(N-1)/2\)
- Returns:
Skew-antisymmetric matrix of shape \((N, N)\)
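A quick shape and antisymmetry check (a sketch; the expected output shape follows from the note above):

import numpy as np
from matchcake.utils import skew_antisymmetric_vector_to_matrix

vec = np.arange(6.0)                     # length N(N-1)/2 = 6, so N = 4
A = skew_antisymmetric_vector_to_matrix(vec)
print(A.shape)                           # expected (4, 4)
print(np.allclose(A, -A.T))              # skew-antisymmetric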
- matchcake.utils.state_to_binary_state(state: int | ndarray | sparray, n: int | None = None) ndarray ¶
Convert a state to a binary state. The binary state is a binary vector of length \(n\), where \(n\) is the number of particles; the state is a vector of length \(2^n\).
- Parameters:
state (Union[np.ndarray, sparse.sparray]) – State. If the state is an integer, the state is assumed to be a pure state in the computational basis and the number of particles must be specified. If the state is a vector, the number of particles is inferred from the shape of the vector as \(n = \log_2(\text{len}(\text{state}))\).
n (Optional[int]) – Number of particles. Used only if the state is an integer.
- Returns:
Binary state as a binary string.
- Return type:
np.ndarray
>>> state_to_binary_state(0, n=2)
array([0, 0])
>>> state_to_binary_state(1, n=2)
array([0, 1])
>>> state_to_binary_state(2, n=2)
array([1, 0])
>>> state_to_binary_state(3, n=2)
array([1, 1])
>>> state_to_binary_state(np.array([1, 0, 0, 0]))
array([0, 0])
>>> state_to_binary_state(np.array([0, 1, 0, 0]))
array([0, 1])
>>> state_to_binary_state(np.array([0, 0, 1, 0]))
array([1, 0])
>>> state_to_binary_state(np.array([0, 0, 0, 1]))
array([1, 1])
>>> state_to_binary_state(np.array([1, 0, 0, 0, 0, 0, 0, 0]))
array([0, 0, 0])
- matchcake.utils.state_to_binary_string(state: int | ndarray | sparray, n: int | None = None) str ¶
Convert a state to a binary string. The binary string has length \(n\), where \(n\) is the number of particles; the state is a vector of length \(2^n\).
- Parameters:
state (Union[np.ndarray, sparse.sparray]) – State. If the state is an integer, the state is assumed to be a pure state in the computational basis and the number of particles must be specified. If the state is a vector, the number of particles is inferred from the shape of the vector as \(n = \log_2(\text{len}(\text{state}))\).
n (Optional[int]) – Number of particles. Used only if the state is an integer.
- Returns:
Binary state as a binary string.
- Return type:
str
>>> state_to_binary_string(0, n=2)
'00'
>>> state_to_binary_string(1, n=2)
'01'
>>> state_to_binary_string(2, n=2)
'10'
>>> state_to_binary_string(3, n=2)
'11'
>>> state_to_binary_string(np.array([1, 0, 0, 0]))
'00'
>>> state_to_binary_string(np.array([0, 1, 0, 0]))
'01'
>>> state_to_binary_string(np.array([0, 0, 1, 0]))
'10'
>>> state_to_binary_string(np.array([0, 0, 0, 1]))
'11'
>>> state_to_binary_string(np.array([1, 0, 0, 0, 0, 0, 0, 0]))
'000'