matchcake.ml.torch_models package¶
Submodules¶
matchcake.ml.torch_models.nif_torch_model module¶
- class matchcake.ml.torch_models.nif_torch_model.NIFTorchModel(*, n_qubits: int | None = None, **kwargs)¶
Bases: TorchModel
- ATTRS_TO_HPARAMS = ['use_cuda', 'seed', 'max_grad_norm', 'learning_rate', 'optimizer', 'params_init', 'fit_patience', 'n_qubits']¶
- DEFAULT_N_QUBITS = None¶
- MODEL_NAME = 'NIFTorchModel'¶
- __init__(*, n_qubits: int | None = None, **kwargs)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- classmethod add_model_specific_args(parent_parser: ArgumentParser | None = None)¶
- circuit(*args, **kwargs)¶
- property wires¶
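A minimal instantiation sketch based only on the constructor signature shown above; the learning_rate keyword is assumed to be forwarded to the TorchModel base class, and the 4-qubit setting is arbitrary.

from matchcake.ml.torch_models.nif_torch_model import NIFTorchModel

# Build a NIFTorchModel on 4 qubits; extra keyword arguments are assumed
# to be forwarded to the TorchModel constructor.
model = NIFTorchModel(n_qubits=4, learning_rate=2e-4)
print(model.MODEL_NAME)  # 'NIFTorchModel'
print(model.wires)       # wires used by the underlying circuit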
matchcake.ml.torch_models.torch_model module¶
- class matchcake.ml.torch_models.torch_model.TorchModel(*, use_cuda: bool = False, seed: int = 0, save_root: str | None = '/home/runner/work/MatchCake/MatchCake/sphinx/data/models', save_dir: str | None = None, max_grad_norm: float = 1.0, learning_rate: float = 0.0002, optimizer: str = 'SimulatedAnnealing', params_init: str = 'Random', fit_patience: int | None = 10, **kwargs)¶
Bases: Module
- ATTRS_TO_HPARAMS = ['use_cuda', 'seed', 'max_grad_norm', 'learning_rate', 'optimizer', 'params_init', 'fit_patience']¶
- ATTRS_TO_JSON = ['fit_history']¶
- ATTRS_TO_PICKLE = ['fit_time', 'start_fit_time', 'end_fit_time']¶
- ATTRS_TO_STATE_DICT = []¶
- DEFAULT_FIT_PATIENCE = 10¶
- DEFAULT_LEARNING_RATE = 0.0002¶
- DEFAULT_LOG_FUNC()¶
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default. Optional keyword arguments: file: a file-like object (stream); defaults to the current sys.stdout. sep: string inserted between values, default a space. end: string appended after the last value, default a newline. flush: whether to forcibly flush the stream.
- DEFAULT_MAX_GRAD_NORM = 1.0¶
- DEFAULT_OPTIMIZER = 'SimulatedAnnealing'¶
- DEFAULT_PARAMETERS_INITIALISATION_STRATEGY = 'Random'¶
- DEFAULT_SAVE_DIR = None¶
- DEFAULT_SAVE_ROOT = '/home/runner/work/MatchCake/MatchCake/sphinx/data/models'¶
- DEFAULT_SEED = 0¶
- DEFAULT_USE_CUDA = False¶
- MODEL_NAME = 'TorchModel'¶
- __init__(*, use_cuda: bool = False, seed: int = 0, save_root: str | None = '/home/runner/work/MatchCake/MatchCake/sphinx/data/models', save_dir: str | None = None, max_grad_norm: float = 1.0, learning_rate: float = 0.0002, optimizer: str = 'SimulatedAnnealing', params_init: str = 'Random', fit_patience: int | None = 10, **kwargs)¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- classmethod add_model_specific_args(parent_parser: ArgumentParser | None = None)¶
- property best_model_path¶
- cast_tensor_to_interface(tensor)¶
- cost(*args, **kwargs) → TensorLike¶
This function should return the overall cost of the model. If the cost depends on a dataset, this function should be overridden and the cost should be computed over the whole dataset.
This function will be called by the fit_closure function which will be used by the optimizer to optimize the model’s parameters.
- Note:
The arguments and keyword arguments passed to the fit function are stored in fit_args and fit_kwargs; they can be useful for computing the cost in this function. For example, the dataset can be passed as a keyword argument to the fit function and accessed here with kwargs.get("dataset", None), as in the sketch below.
- Parameters:
args – Arguments to pass to the forward function.
kwargs – Keyword arguments to pass to the forward function.
- Returns:
The overall cost of the model.
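A hedged sketch of the override pattern described in the note above: MyModel, its linear layer, forward pass and loss are hypothetical placeholders; only the kwargs.get("dataset", None) / fit_kwargs access pattern comes from the documentation.

import torch
from matchcake.ml.torch_models.torch_model import TorchModel

class MyModel(TorchModel):
    # Hypothetical subclass: the layer, forward pass and loss are placeholders.
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.linear = torch.nn.Linear(2, 1)

    def forward(self, x):
        return self.linear(x)

    def cost(self, *args, **kwargs):
        # The dataset forwarded from fit(..., dataset=...) is available either
        # directly in kwargs or through the stored fit_kwargs (see the note above).
        dataset = kwargs.get("dataset", None)
        if dataset is None:
            dataset = getattr(self, "fit_kwargs", {}).get("dataset", None)
        x, y = dataset
        y_pred = self(x)  # call the instance so registered hooks run
        return torch.nn.functional.mse_loss(y_pred, y)

Calling model.fit(dataset=(x_train, y_train), n_iterations=100) would then make the dataset available inside cost, assuming fit stores its keyword arguments as described in the note.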
- classmethod default_save_dir_from_args(args)¶
- draw_mpl(fig: Figure | None = None, ax: Axes | None = None, **kwargs)¶
- fit(*args, n_iterations: int = 100, n_init_iterations: int = 1, **kwargs)¶
- fit_callback(*args, **kwargs)¶
- fit_closure(parameters: List[Parameter] | None = None, *args, **kwargs) → TensorLike¶
Assigns the parameters to the model and returns the cost.
- Parameters:
parameters (Optional[List[torch.nn.Parameter]]) – The parameters to assign to the model.
args (Any) – Arguments to pass to the cost function.
kwargs (Any) – Keyword arguments to pass to the cost function.
- Returns:
The cost.
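A hypothetical manual use of fit_closure, continuing the MyModel sketch above: a custom optimisation step evaluates a perturbed copy of the current parameters, with keyword arguments forwarded to the cost function as stated in the parameter list.

import torch

x_train = torch.randn(16, 2)
y_train = torch.randn(16, 1)
model = MyModel()

# Build a perturbed copy of the current parameters and evaluate it:
# fit_closure assigns the candidate parameters to the model and returns the cost.
candidate = [torch.nn.Parameter(p.detach() + 0.01) for p in model.parameters()]
cost = model.fit_closure(candidate, dataset=(x_train, y_train))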
- forward(*args: Any, **kwargs: Any) → Any¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
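A short reminder of the standard PyTorch convention from the note above; model and x are the hypothetical objects from the earlier sketches.

y = model.forward(x)  # discouraged: silently skips registered hooks
y = model(x)          # preferred: runs hooks, then forward()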
- classmethod from_folder(folder: str, model_args: Sequence[Any] | None = None, model_kwargs: Dict[str, Any] | None = None, **kwargs) → TorchModel¶
- classmethod from_folder_or_new(folder: str, model_args: Sequence[Any] | None = None, model_kwargs: Dict[str, Any] | None = None, **kwargs) → TorchModel¶
- property hparams_path¶
- initialize_parameters_()¶
- property jsons_path¶
- load(model_path: str | None = None, load_hparams: bool = True, **kwargs) → TorchModel¶
- load_best(**kwargs) → TorchModel¶
- load_best_if_exists(**kwargs) → TorchModel¶
- load_hparams() → TorchModel¶
- load_if_exists(model_path: str | None = None, load_hparams: bool = True, **kwargs) → TorchModel¶
- load_jsons() → TorchModel¶
- load_pickles() → TorchModel¶
- load_state_dict(state_dict, strict=True)¶
Copy parameters and buffers from state_dict into this module and its descendants.
If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Warning
If assign is True, the optimizer must be created after the call to load_state_dict unless get_swap_module_params_on_conversion() is True.
- Parameters:
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
assign (bool, optional) – When set to False, the properties of the tensors in the current module are preserved, whereas setting it to True preserves the properties of the Tensors in the state dict. The only exception is the requires_grad field of Parameters, for which the value from the module is preserved. Default: False
- Returns:
- missing_keys is a list of str containing any keys that are expected by this module but missing from the provided state_dict.
- unexpected_keys is a list of str containing the keys that are not expected by this module but present in the provided state_dict.
- Return type:
NamedTuple with missing_keys and unexpected_keys fields
Note
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
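A standard PyTorch pattern illustrating the return value; the checkpoint file name is illustrative only.

import torch

state = torch.load("checkpoint.pt")                  # hypothetical checkpoint file
result = model.load_state_dict(state, strict=False)  # tolerate key mismatches
print(result.missing_keys, result.unexpected_keys)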
- property model_path¶
- property pickles_path¶
- plot_fit_history(fig=None, ax=None, **kwargs)¶
- predict(*args, **kwargs)¶
- save(model_path: str | None = None) → TorchModel¶
- save_best() → TorchModel¶
- save_hparams() → TorchModel¶
- save_jsons() → TorchModel¶
- save_metrics(metrics: Dict[str, Any], filename: str = 'metrics.json')¶
- property save_path¶
- save_pickles() → TorchModel¶
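A hypothetical save/reload round trip based on the signatures above; the exact folder layout is assumed to be handled internally by the save_* and load_* helpers, and MyModel is the sketch subclass from the cost example.

model.save()  # persists hparams, jsons, pickles and the state dict under model.save_path

restored = MyModel().load_if_exists()            # reload if a checkpoint is found
restored = MyModel.from_folder(model.save_path)  # or rebuild from the saved folder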
- score(*args, **kwargs)¶
- state_dict(*args, destination=None, prefix='', keep_vars=False)¶
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module's parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of the argument destination as it is not designed for end-users.
- Parameters:
destination (dict, optional) – If provided, the state of the module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in the state_dict. Default: ''.
keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing a whole state of the module
- Return type:
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- property torch_device¶
- update_parameters(parameters: List[Parameter | Tuple[str, Parameter]])¶
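A sketch of update_parameters based solely on its signature, which accepts either plain Parameters or (name, Parameter) pairs; the rescaling is arbitrary and purely illustrative.

import torch

# Pass named, rescaled copies of the current parameters (illustrative only);
# plain Parameters without names are also accepted by the signature.
new_params = [(name, torch.nn.Parameter(0.5 * p.detach()))
              for name, p in model.named_parameters()]
model.update_parameters(new_params)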