domid.compos package

Submodules

domid.compos.DEC_clustering_layer module

class domid.compos.DEC_clustering_layer.DECClusteringLayer(n_clusters=10, hidden=10, cluster_centers=None, alpha=1.0, device='cpu')[source]

Bases: Module

__init__(n_clusters=10, hidden=10, cluster_centers=None, alpha=1.0, device='cpu')[source]
Parameters:
  • n_clusters – The number of clusters.

  • hidden – The size of the hidden layer.

  • cluster_centers – The initial cluster centers.

  • alpha – The alpha parameter for the Student’s t-distribution.

  • device – The device to use (e.g. ‘cpu’, ‘cuda’).

forward(x)[source]

Performs forward propagation through the clustering layer. Corresponds to equation (1) of the DEC paper (Xie et al., 2016).

Parameters:

x – input tensor of feature representations.

Return t_dist:

The soft cluster assignments.
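
The soft assignment is a Student's t-distribution kernel between each feature vector and each cluster center. A minimal sketch of the computation, assuming cluster_centers has shape (n_clusters, hidden); not necessarily this class's exact implementation:

    import torch

    def soft_assignments(x, cluster_centers, alpha=1.0):
        """Student's t-kernel soft assignments, eq. (1) of the DEC paper.

        x: (batch, hidden) features; cluster_centers: (n_clusters, hidden).
        Returns a (batch, n_clusters) matrix whose rows sum to 1.
        """
        # Squared Euclidean distance between every sample and every center.
        sq_dist = torch.sum((x.unsqueeze(1) - cluster_centers) ** 2, dim=2)
        # Unnormalized t-kernel with `alpha` degrees of freedom.
        q = (1.0 + sq_dist / alpha) ** (-(alpha + 1.0) / 2.0)
        # Normalize over clusters.
        return q / q.sum(dim=1, keepdim=True)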

domid.compos.GNN module

class domid.compos.GNN.GNN(n_input, n_enc_1, n_enc_2, n_enc_3, n_z, n_clusters, device)[source]

Bases: Module

__init__(n_input, n_enc_1, n_enc_2, n_enc_3, n_z, n_clusters, device)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, adj, tra1, tra2, tra3, z, sigma=0.5)[source]
Parameters:
  • x – image batch

  • adj – adjacency matrix from the constructed graph for the batch of images

  • tra1 – features from the first layer of the encoder

  • tra2 – features from the second layer of the encoder

  • tra3 – features from the third layer of the encoder

  • z – latent features from the encoder

  • sigma – weighting coefficient balancing the GNN features and the encoder features at each layer (default: 0.5)

Returns:

hidden representation used for clustering
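
The tra* inputs and the sigma mixing weight follow the SDCN pattern, in which each graph-convolution layer receives a weighted combination of the previous GNN output and the features from the matching encoder layer. A hedged sketch of such a forward pass (the layer list and the final softmax head are assumptions, not necessarily this class's code):

    import torch.nn.functional as F

    def gnn_forward(layers, x, adj, tra1, tra2, tra3, z, sigma=0.5):
        # `layers` stands in for five graph-convolution callables taking
        # (features, adj); the last one maps to n_clusters logits.
        h = layers[0](x, adj)
        # Mix encoder features into each subsequent layer's input.
        h = layers[1]((1 - sigma) * h + sigma * tra1, adj)
        h = layers[2]((1 - sigma) * h + sigma * tra2, adj)
        h = layers[3]((1 - sigma) * h + sigma * tra3, adj)
        h = layers[4]((1 - sigma) * h + sigma * z, adj)
        # Soft cluster predictions over the clusters.
        return F.softmax(h, dim=1)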

domid.compos.GNN_layer module

class domid.compos.GNN_layer.GNNLayer(in_features, out_features, device)[source]

Bases: Module

__init__(in_features, out_features, device)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(features, adj, activation=ReLU())[source]
Parameters:
  • features – features from specific layer of the encoder

  • adj – adjacency matrix from the constructed graph

  • activation – activation function applied to the layer output (default: ReLU())

Returns:

hidden representation produced by the GNN layer
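
A layer of this form typically implements graph convolution: a linear transform of the node features followed by aggregation over the graph via the adjacency matrix. A minimal sketch, assuming a dense weight matrix of shape (in_features, out_features):

    import torch

    def gnn_layer_forward(features, adj, weight, activation=torch.relu):
        # features: (n_nodes, in_features); adj: (n_nodes, n_nodes);
        # weight: (in_features, out_features).
        support = features @ weight   # transform node features
        output = adj @ support        # aggregate over graph neighborhoods
        return activation(output) if activation is not None else output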

domid.compos.VAE_blocks module

domid.compos.VAE_blocks.get_output_shape(model, image_dim)[source]
domid.compos.VAE_blocks.cnn_encoding_block(in_c, out_c, kernel_size=(4, 4), stride=2, padding=1)[source]
domid.compos.VAE_blocks.cnn_decoding_block(in_c, out_c, kernel_size=(3, 3), stride=2, padding=1)[source]
class domid.compos.VAE_blocks.UnFlatten(num_channels)[source]

Bases: Module

__init__(num_channels)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

domid.compos.VAE_blocks.linear_block(in_c, out_c)[source]
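
These helpers are thin wrappers around standard layers. A plausible sketch of their composition (the normalization and activation choices are assumptions, not necessarily the actual code):

    import torch
    from torch import nn

    def get_output_shape(model, image_dim):
        # Infer the output shape by tracing a dummy tensor through the
        # model (assumes image_dim includes the batch dimension).
        return model(torch.rand(*image_dim)).shape

    def cnn_encoding_block(in_c, out_c, kernel_size=(4, 4), stride=2, padding=1):
        # Hypothetical composition: Conv -> BatchNorm -> activation.
        return nn.Sequential(
            nn.Conv2d(in_c, out_c, kernel_size, stride=stride, padding=padding),
            nn.BatchNorm2d(out_c),
            nn.LeakyReLU(),
        )

    def linear_block(in_c, out_c):
        # Hypothetical composition: Linear -> activation.
        return nn.Sequential(nn.Linear(in_c, out_c), nn.ReLU())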

domid.compos.cnn_AE module

class domid.compos.cnn_AE.ConvolutionalEncoder(zd_dim, num_channels=3, num_filters=[32, 64, 128], i_w=28, i_h=28, k=[3, 3, 3])[source]

Bases: Module

__init__(zd_dim, num_channels=3, num_filters=[32, 64, 128], i_w=28, i_h=28, k=[3, 3, 3])[source]

AE Encoder

Parameters:
  • zd_dim – dimension of the latent space

  • num_channels – number of channels of the input

  • num_filters – list of number of filters for each convolutional layer

  • i_w – width of the input

  • i_h – height of the input

  • k – list of kernel sizes for each convolutional layer

get_z(x)[source]
get_log_sigma2(x)[source]
forward(x)[source]
Parameters:

x – input data
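
A usage sketch for the encoder; the expected latent shape is an assumption based on zd_dim:

    import torch
    from domid.compos.cnn_AE import ConvolutionalEncoder

    encoder = ConvolutionalEncoder(zd_dim=20, num_channels=3, i_w=28, i_h=28)
    x = torch.rand(16, 3, 28, 28)   # batch of 16 RGB 28x28 images
    z = encoder.get_z(x)            # latent codes, expected shape (16, 20)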

class domid.compos.cnn_AE.ConvolutionalDecoder(prior, zd_dim, domain_dim, h_dim, num_channels=3, num_filters=[32, 64, 128], k=[4, 4, 4])[source]

Bases: Module

__init__(prior, zd_dim, domain_dim, h_dim, num_channels=3, num_filters=[32, 64, 128], k=[4, 4, 4])[source]

AE Decoder

Parameters:
  • zd_dim – dimension of the latent space, which is the input space of the decoder

  • h_dim – dimension of the first hidden layer, which is a linear layer

  • num_channels – number of channels of the output; the output will have twice as many channels, e.g., 3 channels for the mean and 3 channels for log-sigma if num_channels is 3

  • num_filters – list of number of filters for each convolutional layer, given in reverse order

  • k – list of kernel sizes for each convolutional layer

forward(z)[source]
Parameters:

z – latent space representation

Return x_pro:

reconstructed data; assumed to have 3 channels, with all channels equal to each other.

domid.compos.cnn_VAE module

class domid.compos.cnn_VAE.ConvolutionalEncoder(zd_dim, num_channels=3, num_filters=[32, 64, 128], i_w=28, i_h=28, k=[3, 3, 3])[source]

Bases: Module

__init__(zd_dim, num_channels=3, num_filters=[32, 64, 128], i_w=28, i_h=28, k=[3, 3, 3])[source]

VAE Encoder

Parameters:
  • zd_dim – dimension of the latent space

  • num_channels – number of channels of the input

  • num_filters – list of number of filters for each convolutional layer

  • i_w – width of the input

  • i_h – height of the input

  • k – list of kernel sizes for each convolutional layer

get_z(x)[source]
get_log_sigma2(x)[source]
forward(x)[source]
Parameters:

x – input data

class domid.compos.cnn_VAE.ConvolutionalDecoder(prior, zd_dim, domain_dim, h_dim, num_channels=3, num_filters=[32, 64, 128], k=[4, 4, 4])[source]

Bases: Module

__init__(prior, zd_dim, domain_dim, h_dim, num_channels=3, num_filters=[32, 64, 128], k=[4, 4, 4])[source]

VAE Decoder

Parameters:
  • zd_dim – dimension of the latent space, which is the input space of the decoder

  • h_dim – dimension of the first hidden layer, which is a linear layer

  • num_channels – number of channels of the output; the output will have twice as many channels, e.g., 3 channels for the mean and 3 channels for log-sigma if num_channels is 3

  • num_filters – list of number of filters for each convolutional layer, given in reverse order

  • k – list of kernel sizes for each convolutional layer

forward(z)[source]
Parameters:

z – latent space representation

Return x_pro:

reconstructed data; assumed to have 3 channels, with all channels equal to each other.

Return x_log_sigma2:

log-variance of the reconstructed data
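
Together, the encoder and decoder support the usual encode, reparameterize, decode loop. A hedged sketch (the reparameterization step and the constructor argument values below are placeholders, not values prescribed by this API):

    import torch
    from domid.compos.cnn_VAE import ConvolutionalEncoder, ConvolutionalDecoder

    encoder = ConvolutionalEncoder(zd_dim=20)
    # prior, domain_dim, and h_dim values below are placeholders.
    decoder = ConvolutionalDecoder(prior="Gaussian", zd_dim=20, domain_dim=10,
                                   h_dim=2000)

    x = torch.rand(16, 3, 28, 28)
    mu = encoder.get_z(x)
    log_sigma2 = encoder.get_log_sigma2(x)
    # Reparameterization trick: sample z ~ N(mu, sigma^2).
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_sigma2)
    x_pro, x_log_sigma2 = decoder(z)  # reconstruction mean and log-variance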

domid.compos.linear_AE module

class domid.compos.linear_AE.LinearEncoderAE(n_enc_1, n_enc_2, n_enc_3, n_input, n_z)[source]

Bases: Module

__init__(n_enc_1, n_enc_2, n_enc_3, n_input, n_z)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

get_z(x)[source]
get_log_sigma2(x)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class domid.compos.linear_AE.LinearDecoderAE(n_dec_1, n_dec_2, n_dec_3, n_input, n_z)[source]

Bases: Module

__init__(n_dec_1, n_dec_2, n_dec_3, n_input, n_z)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(z)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
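
From the constructor signature, the encoder plausibly stacks three fully connected layers between the input and the latent code, with the decoder mirroring them. A hedged sketch of the encoder path (the activations and any extra heads are assumptions):

    from torch import nn

    class LinearEncoderSketch(nn.Module):
        # Hypothetical reconstruction from the signature, not the actual code.
        def __init__(self, n_enc_1, n_enc_2, n_enc_3, n_input, n_z):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_input, n_enc_1), nn.ReLU(),
                nn.Linear(n_enc_1, n_enc_2), nn.ReLU(),
                nn.Linear(n_enc_2, n_enc_3), nn.ReLU(),
                nn.Linear(n_enc_3, n_z),
            )

        def forward(self, x):
            return self.net(x)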

domid.compos.linear_VAE module

class domid.compos.linear_VAE.LinearEncoder(zd_dim, input_dim=(3, 28, 28), features_dim=[500, 500, 2000])[source]

Bases: Module

__init__(zd_dim, input_dim=(3, 28, 28), features_dim=[500, 500, 2000])[source]

VAE Encoder with linear layers

Parameters:
  • zd_dim – dimension of the latent space

  • input_dim – dimensions of the input, e.g., (3, 28, 28) for MNIST in RGB format

  • features_dim – list of dimensions of the hidden layers

get_z(x)[source]
get_log_sigma2(x)[source]
forward(x)[source]
Parameters:

x – input data, assumed to have 3 channels
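
A usage sketch; the expected output shapes are assumptions based on zd_dim:

    import torch
    from domid.compos.linear_VAE import LinearEncoder

    encoder = LinearEncoder(zd_dim=20, input_dim=(3, 28, 28))
    x = torch.rand(16, 3, 28, 28)
    mu = encoder.get_z(x)                   # posterior means, expected (16, 20)
    log_sigma2 = encoder.get_log_sigma2(x)  # posterior log-variances, (16, 20)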

class domid.compos.linear_VAE.LinearDecoder(prior, zd_dim, input_dim=(3, 28, 28), features_dim=[500, 500, 2000])[source]

Bases: Module

__init__(prior, zd_dim, input_dim=(3, 28, 28), features_dim=[500, 500, 2000])[source]

VAE Decoder

Parameters:
  • zd_dim – dimension of the latent space

  • input_dim – dimension of the original input / output reconstruction, e.g., (3, 28, 28) for MNIST in RGB format

  • features_dim – list of dimensions of the hidden layers, given in reverse order

forward(z)[source]
Parameters:

z – latent space representation

Return x_pro:

reconstructed data; assumed to have 3 channels, with all channels equal to each other.

domid.compos.nn_net module

class domid.compos.nn_net.Net_MNIST(y_dim, img_size)[source]

Bases: Module

__init__(y_dim, img_size)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

probe(img_size)[source]
conv_op(x)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

domid.compos.nn_net.test_Net_MNIST()[source]

domid.compos.predict_basic module

class domid.compos.predict_basic.Prediction(model, device, loader_tr, loader_val, i_h, i_w, bs)[source]

Bases: object

__init__(model, device, loader_tr, loader_val, i_h, i_w, bs)[source]
mk_prediction()[source]

Convenience function for storing results. Makes predictions for the training images using the current state of the model.

Returns:
  • tensor of input dataset images

  • Z-space representations of the input images under the current model

  • predicted domain/cluster labels

  • image acquisition machine labels for the input images (when applicable/available)
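
A usage sketch; model and the loaders are placeholders for a trained model and its DataLoaders, and the unpacking order follows the list above:

    from domid.compos.predict_basic import Prediction

    # model, loader_tr, loader_val are placeholders; i_h, i_w, and bs must
    # match the training configuration.
    pred = Prediction(model, "cuda", loader_tr, loader_val, i_h=28, i_w=28, bs=16)
    images, z, domain_labels, machine_labels = pred.mk_prediction()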

epoch_tr_acc()[source]

Calculates the accuracy and confusion matrix on the training set, for both vec_d and vec_y labels and predictions.

epoch_val_acc()[source]

Calculates the accuracy and confusion matrix on the validation set, for both vec_d and vec_y labels and predictions.

epoch_tr_correlation()[source]

Calculates the correlation with HER2 scores on the training set. Only used for the HER2 dataset/task.

epoch_val_correlation()[source]

Calculates the correlation with HER2 scores on the validation set. Only used for the HER2 dataset/task.

domid.compos.tensorboard_fun module

domid.compos.tensorboard_fun.tensorboard_write(writer, model, epoch, lr, warmup_beta, acc_tr, loss, pretraining_finished, tensor_x, inject_tensor=None, other_info=None)[source]
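
A plausible sketch of what such a helper logs with torch.utils.tensorboard; the tags and the subset of arguments shown are assumptions:

    from torch.utils.tensorboard import SummaryWriter

    def tensorboard_write_sketch(writer: SummaryWriter, epoch, lr, acc_tr, loss):
        # Hypothetical logging; the real helper also receives warmup_beta,
        # pretraining state, and example image tensors.
        writer.add_scalar("learning_rate", lr, epoch)
        writer.add_scalar("accuracy/train", acc_tr, epoch)
        writer.add_scalar("loss", loss, epoch)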

Module contents