- class ModelTrainEmbeddings(data: Graph, loss_function: Dict, device: torch.device, conv: str = 'GCN', tune_out: bool = False)
Bases:
object
Model for training Net, which builds embeddings for the Geom-GCN layer
- Parameters:
data – (Graph): Input graph
loss_function – (dict): Dict of parameters of the unsupervised loss function
device – (device): Either 'cuda' or 'cpu' (default: 'cuda')
conv – (str): Name of the convolution (default: 'GCN')
tune_out – (bool): Whether to tune the size of the output layer; if False, it is fixed to 2 for Geom-GCN (default: False)
- run(params: Dict) → torch_geometric.typing.Tensor
Learn embeddings
- Parameters:
params – (dict): Hyperparameters for learning: hidden layer size, dropout, number of layers of the model, learning rate
- Returns:
(Tensor): The output embeddings
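A minimal usage sketch. The import path stable_gnn.embedding, the params dict keys (hidden_layer, dropout, num_layers, lr), and the use of a Planetoid graph as a stand-in for the library's Graph object are assumptions for illustration and are not stated in this reference.

```python
import torch
from torch_geometric.datasets import Planetoid

# Hypothetical import path; adjust to where ModelTrainEmbeddings lives in your installation.
from stable_gnn.embedding import ModelTrainEmbeddings

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in input graph; the class expects the library's Graph object.
data = Planetoid(root="/tmp/Cora", name="Cora")[0]

# Settings of the unsupervised loss; the key used here is an illustrative placeholder.
loss_function = {"Name": "Random Walks"}

trainer = ModelTrainEmbeddings(
    data=data,
    loss_function=loss_function,
    device=device,
    conv="GCN",
    tune_out=False,
)

# Key names are assumed; the reference only states that the dict carries
# the hidden layer size, dropout, number of layers and learning rate.
params = {"hidden_layer": 64, "dropout": 0.5, "num_layers": 2, "lr": 0.01}
embeddings = trainer.run(params)  # Tensor of node embeddings for the Geom-GCN layer
print(embeddings.shape)
```

The returned tensor is then used as the precomputed embeddings for the Geom-GCN layer.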
- class OptunaTrainEmbeddings(data: Graph, loss_function: Dict, device: torch.device, conv: str = 'GCN', tune_out: bool = False)
Bases:
ModelTrainEmbeddings
Model for training Net, which builds embeddings for the Geom-GCN layer
- Parameters:
data – (Graph): Input graph
loss_function – (dict): Dict of parameters of the unsupervised loss function
device – (device): Either 'cuda' or 'cpu' (default: 'cuda')
conv – (str): Name of the convolution (default: 'GCN')
tune_out – (bool): Whether to tune the size of the output layer; if False, it is fixed to 2 for Geom-GCN (default: False)
- run(number_of_trials: int) → Dict[Any, Any]
Tune hyperparameters for learning embeddings
- Parameters:
number_of_trials – (int): Number of trials for Optuna
- Returns:
(dict): Learned hyperparameters: hidden layer size, dropout, number of layers of the model, learning rate
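A sketch of the tuning flow, continuing from the example above. It assumes the same hypothetical import path and simply feeds the best parameters found by Optuna back into ModelTrainEmbeddings; only the constructor arguments and run signatures come from this reference.

```python
# Hypothetical import path; adjust to your package layout.
from stable_gnn.embedding import ModelTrainEmbeddings, OptunaTrainEmbeddings

tuner = OptunaTrainEmbeddings(
    data=data,
    loss_function=loss_function,
    device=device,
    conv="GCN",
    tune_out=False,
)

# Search over hidden layer size, dropout, number of layers and learning rate.
best_params = tuner.run(number_of_trials=50)

# Train the final embeddings with the tuned parameters.
final_trainer = ModelTrainEmbeddings(
    data=data, loss_function=loss_function, device=device, conv="GCN"
)
embeddings = final_trainer.run(best_params)
```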