matrice.projects module#
Module for interacting with backend API to manage projects.
- class matrice.projects.Projects(session, project_name=None, project_id=None)[source]#
Bases:
object
A class for handling project-related operations using the backend API.
- account_number#
The account number associated with the session.
- Type:
str
- project_name#
The name of the project.
- Type:
str
- project_id#
The ID of the project (initialized in the constructor).
- Type:
str
- project_input#
The input type for the project (initialized in the constructor).
- Type:
str
- output_type#
The output type for the project (initialized in the constructor).
- Type:
str
- Parameters:
session (Session) – The session object used for API interactions.
project_name (str, optional) – The name of the project.
project_id (str, optional) – The ID of the project.
- __init__(session, project_name=None, project_id=None)[source]#
Initialize a Projects object with project details.
- Parameters:
session (Session) – The session object used for API interactions.
project_name (str, optional) – The name of the project.
project_id (str, optional) – The ID of the project.
- change_status(enable=True)[source]#
Enable or disable a project. The project is enabled by default.
- Parameters:
enable (bool, optional) – Whether to enable (True) or disable (False) the project (default is True).
- Returns:
A tuple containing: - A success message if the project is enabled or disabled successfully. - An error message if the action fails.
- Return type:
tuple
Example
>>> success_message, error = project.change_status(enable=True)
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(success_message)
- create_annotation(project_type, ann_title, dataset_id, dataset_version, labels, only_unlabeled, is_ML_assisted, labellers, reviewers, guidelines)[source]#
Create a new annotation for a dataset.
- Parameters:
project_type (str) – The type of the project for which the annotation is being created.
ann_title (str) – The title of the annotation.
dataset_id (str) – The ID of the dataset to annotate.
dataset_version (str) – The version of the dataset.
labels (list) – The list of labels for the annotation.
only_unlabeled (bool) – Whether to annotate only unlabeled data.
is_ML_assisted (bool) – Whether the annotation is ML-assisted.
labellers (list) – The list of labellers for the annotation.
reviewers (list) – The list of reviewers for the annotation.
guidelines (str) – The guidelines for the annotation.
- Returns:
A tuple containing: - An Annotation object for the created annotation. - An Action object related to the annotation creation process.
- Return type:
tuple
Example
>>> annotation, action = project.create_annotation(
...     "object_detection", "MyAnnotation", "dataset123", "v1.0",
...     ["label1", "label2"], True, False,
...     [{"email": "user-email", "name": "username", "percentageWork": '100'}],
...     [{"email": "user-email", "name": "username", "percentageWork": '100'}],
...     "Follow these guidelines")
>>> if action:
...     print(f"Annotation created: {annotation}")
... else:
...     print(f"Error: {annotation}")
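The labellers and reviewers arguments expect a list of dicts with "email", "name", and "percentageWork" keys, as in the example above. A small helper (hypothetical, not part of the SDK) can build these entries and sanity-check that the assigned percentages sum to 100:

```python
def make_workers(*entries):
    """Build labeller/reviewer entries of the shape
    {"email": ..., "name": ..., "percentageWork": ...} and verify
    that the assigned work percentages sum to 100."""
    workers = [
        {"email": email, "name": name, "percentageWork": str(pct)}
        for email, name, pct in entries
    ]
    total = sum(int(w["percentageWork"]) for w in workers)
    if total != 100:
        raise ValueError(f"percentageWork must sum to 100, got {total}")
    return workers

# Two labellers splitting the work 60/40:
labellers = make_workers(("a@example.com", "alice", 60),
                         ("b@example.com", "bob", 40))
```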
- create_deployment(deployment_name, model_id, gpu_required=True, auto_scale=False, auto_shutdown=True, shutdown_threshold=5, compute_alias='', model_type='trained', runtime_framework='Pytorch')[source]#
Create a deployment for a model.
- Parameters:
deployment_name (str) – The name of the deployment.
model_id (str) – The ID of the model to be deployed.
gpu_required (bool, optional) – Flag to indicate if GPU is required for the deployment (default is True).
auto_scale (bool, optional) – Flag to indicate if auto-scaling is enabled (default is False).
auto_shutdown (bool, optional) – Flag to indicate if auto-shutdown is enabled (default is True).
shutdown_threshold (int, optional) – The threshold for auto-shutdown (default is 5).
compute_alias (str, optional) – The alias for the compute (default is an empty string).
model_type (str, optional) – The type of the model to deploy (default is "trained").
runtime_framework (str, optional) – The runtime framework for the deployment (default is "Pytorch").
- Returns:
A tuple containing: - A Deployment object for the created deployment. - An Action object related to the deployment process.
- Return type:
tuple
Example
>>> deployment, action = project.create_deployment("Deployment1", "model123", auto_scale=True)
>>> if action:
...     print(f"Deployment created: {deployment}")
... else:
...     print(f"Error: {deployment}")
- create_experiment(name, dataset_id, target_run_time, dataset_version, primary_metric, matrice_compute=True, models_trained=[], performance_trade_off=-1)[source]#
Create a new experiment for model training.
- Parameters:
name (str) – The name of the experiment.
dataset_id (str) – The ID of the dataset to be used in the experiment.
target_run_time (str) – The target runtime for the experiment.
dataset_version (str) – The version of the dataset.
primary_metric (str) – The primary metric to evaluate the experiment.
matrice_compute (bool, optional) – Flag to indicate whether to use Matrice compute (default is True).
models_trained (list, optional) – List of models that have been trained in the experiment (default is an empty list).
performance_trade_off (float, optional) – The performance trade-off for the experiment (default is -1).
- Returns:
A tuple containing: - An Experiment object for the created experiment. - An error message if the response indicates an error, or None if successful. - A status message describing the result of the operation.
- Return type:
tuple
Example
>>> experiment, error, message = project.create_experiment(
...     "Experiment1", "dataset123", "runtimeA", "v1.0", "accuracy")
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Experiment created: {experiment}")
- create_model_export(model_train_id, export_formats, model_config, is_gpu_required=False)[source]#
Add export configurations to a trained model.
- Parameters:
model_train_id (str) – The ID of the trained model.
export_formats (list) – The list of formats to export the model.
model_config (dict) – The configuration settings for the model export.
is_gpu_required (bool, optional) – Flag to indicate if GPU is required for the export (default is False).
- Returns:
A tuple containing: - An InferenceOptimization object related to the model export. - An Action object related to the export process.
- Return type:
tuple
Example
>>> inference_opt, action = project.create_model_export(
...     "model123", ["format1", "format2"], {"configKey": "configValue"},
...     is_gpu_required=True)
>>> if action:
...     print(f"Model export created: {inference_opt}")
... else:
...     print(f"Error: {inference_opt}")
- delete()[source]#
Delete a project by project ID.
- Returns:
A tuple containing: - A success message if the project is deleted successfully. - An error message if the deletion fails.
- Return type:
tuple
Example
>>> success_message, error = project.delete()
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(success_message)
- get_actions_logs(action_id)[source]#
Fetch action logs for a specific action.
- Parameters:
action_id (str) – The ID of the action for which logs are to be fetched.
- Returns:
A tuple containing: - The action logs if the request is successful. - An error message if the request fails.
- Return type:
tuple
Example
>>> logs, error = project.get_actions_logs("action123")
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Action logs: {logs}")
- get_annotation(dataset_id=None, annotation_id=None, annotation_name='')[source]#
Get an Annotation instance.
- Parameters:
dataset_id (str, optional) – The ID of the dataset associated with the annotation.
annotation_id (str, optional) – The ID of the annotation.
annotation_name (str, optional) – The name of the annotation.
- Returns:
An Annotation instance with the specified dataset ID, annotation ID, and/or name.
- Return type:
Annotation
Example
>>> annotation = project.get_annotation(annotation_id="annotation123")
>>> print(annotation)
- get_annotations_status_summary()[source]#
Get the annotations status summary for the project.
- Returns:
An ordered dictionary with annotations status and their counts.
- Return type:
OrderedDict
Example
>>> annotations_status = project.get_annotations_status_summary()
>>> print(annotations_status)
- get_dataset(dataset_id=None, dataset_name='')[source]#
Get a Dataset instance.
- Parameters:
dataset_id (str, optional) – The ID of the dataset.
dataset_name (str, optional) – The name of the dataset.
- Returns:
A Dataset instance with the specified ID and/or name.
- Return type:
Dataset
Example
>>> dataset = project.get_dataset(dataset_id="dataset123")
>>> print(dataset)
- get_dataset_status_summary()[source]#
Get the dataset status summary for the project.
- Returns:
An ordered dictionary with dataset status and their counts.
- Return type:
OrderedDict
Example
>>> dataset_status = project.get_dataset_status_summary()
>>> print(dataset_status)
- get_deployment(deployment_id=None, deployment_name='')[source]#
Get a Deployment instance.
- Parameters:
deployment_id (str, optional) – The ID of the deployment.
deployment_name (str, optional) – The name of the deployment.
- Returns:
A Deployment instance with the specified ID and/or name.
- Return type:
Deployment
Example
>>> deployment = project.get_deployment(deployment_id="deployment123")
>>> print(deployment)
- get_deployment_status_summary()[source]#
Get the deployment status summary for the project.
- Returns:
An ordered dictionary with deployment status and their counts.
- Return type:
OrderedDict
Example
>>> deployment_status = project.get_deployment_status_summary()
>>> print(deployment_status)
- get_experiment(experiment_id=None, experiment_name='')[source]#
Get an Experiment instance.
- Parameters:
experiment_id (str, optional) – The ID of the experiment.
experiment_name (str, optional) – The name of the experiment.
- Returns:
An Experiment instance with the specified ID and/or name.
- Return type:
Experiment
Example
>>> experiment = project.get_experiment(experiment_id="experiment123")
>>> print(experiment)
- get_exported_model(model_export_id=None, model_export_name='')[source]#
Get an InferenceOptimization instance.
- Parameters:
model_export_id (str, optional) – The ID of the model export.
model_export_name (str, optional) – The name of the model export.
- Returns:
An InferenceOptimization instance with the specified ID and/or name.
- Return type:
InferenceOptimization
Example
>>> inference_optimization = project.get_exported_model(model_export_id="export123")
>>> print(inference_optimization)
- get_exported_models()[source]#
Fetch all model exports for the project.
- Returns:
A tuple containing: - The model export data if the request is successful. - An error message if the request fails.
- Return type:
tuple
Example
>>> model_exports, error = project.get_exported_models()
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Model exports: {model_exports}")
- get_latest_action_record(service_id)[source]#
Fetch the latest action logs for a specific service ID.
- Parameters:
service_id (str) – The ID of the service for which to fetch the latest action logs.
- Returns:
A tuple containing: - The response dictionary from the API. - An error message if the response indicates an error, or None if successful. - A status message describing the result of the operation.
- Return type:
tuple
Example
>>> result, error, message = project.get_latest_action_record("service123")
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Status: {message}")
- get_model(model_id=None, model_name='')[source]#
Get a Model instance.
- Parameters:
model_id (str, optional) – The ID of the model.
model_name (str, optional) – The name of the model.
- Returns:
A Model instance with the specified ID and/or name.
- Return type:
Model
Example
>>> model = project.get_model(model_id="model123")
>>> print(model)
- get_model_export_status_summary()[source]#
Get the model export status summary for the project.
- Returns:
An ordered dictionary with model export status and their counts.
- Return type:
OrderedDict
Example
>>> model_export_status = project.get_model_export_status_summary()
>>> print(model_export_status)
- get_model_status_summary()[source]#
Get the model status summary for the project.
- Returns:
An ordered dictionary with model status and their counts.
- Return type:
OrderedDict
Example
>>> model_status = project.get_model_status_summary()
>>> print(model_status)
- get_service_action_logs(service_id, service_name)[source]#
Fetch action logs for a specific service.
- Parameters:
service_id (str) – The ID of the service for which to fetch action logs.
service_name (str) – The name of the service for which to fetch action logs.
- Returns:
A tuple containing: - The response dictionary from the API. - An error message if the response indicates an error, or None if successful. - A status message describing the result of the operation.
- Return type:
tuple
Example
>>> result, error, message = project.get_service_action_logs("service123", "MyService")
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Status: {message}")
- import_cloud_dataset(dataset_name, source, source_url, cloud_provider, dataset_type, input_type='image', dataset_description='', version_description='', credential_alias='', bucket_alias='', compute_alias='', source_credential_alias='', bucket_alias_service_provider='auto')[source]#
Upload a cloud dataset.
- Parameters:
dataset_name (str) – The name of the dataset.
source (str) – The source of the dataset.
source_url (str) – The URL of the source.
cloud_provider (str) – The cloud provider for the dataset.
dataset_type (str) – The type of the dataset.
input_type (str, optional) – The input type for the dataset (default is “image”).
dataset_description (str, optional) – A description of the dataset (default is an empty string).
version_description (str, optional) – A description of the dataset version (default is an empty string).
credential_alias (str, optional) – The credential alias for accessing the dataset (default is an empty string).
bucket_alias (str, optional) – The bucket alias for the dataset (default is an empty string).
compute_alias (str, optional) – The compute alias (default is an empty string).
source_credential_alias (str, optional) – The source credential alias (default is an empty string).
bucket_alias_service_provider (str, optional) – The bucket alias service provider (default is “auto”).
- Returns:
A tuple containing: - A Dataset object for the created dataset. - An Action object related to the dataset upload process.
- Return type:
tuple
Example
>>> dataset, action = project.import_cloud_dataset(
...     "MyCloudDataset", "cloud_source", "http://source.url", "AWS", "image")
>>> if action:
...     print(f"Dataset uploaded: {dataset}")
... else:
...     print(f"Error: {dataset}")
- import_local_dataset(dataset_name, file_path, dataset_type, source='lu', dataset_description='', version_description='', input_type='image', credential_alias='', bucket_alias='', compute_alias='', source_credential_alias='', bucket_alias_service_provider='auto')[source]#
Upload a local dataset.
- Parameters:
dataset_name (str) – The name of the dataset.
file_path (str) – The path to the local file.
dataset_type (str) – The type of the dataset.
source (str, optional) – The source of the dataset (default is “lu”).
dataset_description (str, optional) – A description of the dataset (default is an empty string).
version_description (str, optional) – A description of the dataset version (default is an empty string).
input_type (str, optional) – The input type for the dataset (default is “image”).
credential_alias (str, optional) – The credential alias for accessing the dataset (default is an empty string).
bucket_alias (str, optional) – The bucket alias for the dataset (default is an empty string).
compute_alias (str, optional) – The compute alias (default is an empty string).
source_credential_alias (str, optional) – The source credential alias (default is an empty string).
bucket_alias_service_provider (str, optional) – The bucket alias service provider (default is “auto”).
- Returns:
A tuple containing: - A Dataset object for the created dataset. - An Action object related to the dataset upload process.
- Return type:
tuple
Example
>>> dataset, action = project.import_local_dataset("MyLocalDataset", "path/to/data.csv", "image")
>>> if action:
...     print(f"Dataset uploaded: {dataset}")
... else:
...     print(f"Error: {dataset}")
- invite_user_to_project(email, permissions)[source]#
Invite a user to the current project with specific permissions.
This function sends an invitation to a user, identified by their email address, to join the specified project. The user will be assigned the provided permissions for different project services.
- Parameters:
email (str) – The email address of the user to invite.
permissions (dict) – A dictionary specifying the permissions for various project services.
- Returns:
A tuple containing three elements: - API response (dict): The raw response from the API. - error_message (str or None): Error message if an error occurred, None otherwise. - status_message (str): A status message indicating success or failure.
- Return type:
tuple
Example
>>> email = "ashray.gupta@matrice.ai"
>>> permissions = {
...     'datasetsService': {'read': True, 'write': True, 'admin': True},
...     'annotationService': {'read': True, 'write': False, 'admin': False},
...     'modelsService': {'read': True, 'write': False, 'admin': False},
...     'inferenceService': {'read': True, 'write': False, 'admin': False},
...     'deploymentService': {'read': True, 'write': True, 'admin': False},
...     'byomService': {'read': True, 'write': False, 'admin': False}
... }
>>> resp, err, msg = project.invite_user_to_project(email, permissions)
>>> if err:
...     print(f"Error: {err}")
... else:
...     print("User invited successfully")
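Since most invitations grant read access everywhere and elevate only a few services, a convenience helper (hypothetical, not part of the SDK) can build the permissions dict with read-only defaults and per-service overrides:

```python
# Service keys accepted in the permissions dict, per the example above.
SERVICES = ("datasetsService", "annotationService", "modelsService",
            "inferenceService", "deploymentService", "byomService")

def build_permissions(**overrides):
    """Return a permissions dict with read-only defaults for every
    service; pass e.g. datasetsService={"write": True, "admin": True}
    to elevate a specific service."""
    perms = {s: {"read": True, "write": False, "admin": False} for s in SERVICES}
    for service, flags in overrides.items():
        if service not in perms:
            raise KeyError(f"unknown service: {service}")
        perms[service].update(flags)
    return perms

permissions = build_permissions(
    datasetsService={"write": True, "admin": True},
    deploymentService={"write": True},
)
```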
- list_annotations(page_size=10, page_number=0)[source]#
List all annotations in the project.
- Returns:
A tuple containing: - A list of annotations if the request is successful. - An error message if the request fails.
- Return type:
tuple
Example
>>> annotations, error = project.list_annotations()
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Annotations: {annotations}")
- list_collaborators()[source]#
List all collaborators associated with the current project along with their permissions.
This function retrieves a list of all collaborators for the specified project ID.
- Returns:
A tuple containing three elements: - API response (dict): The raw response from the API. - error_message (str or None): Error message if an error occurred, None otherwise. - status_message (str): A status message indicating success or failure.
- Return type:
tuple
Example
>>> resp, err, msg = project.list_collaborators()
>>> if err:
...     print(f"Error: {err}")
... else:
...     print(f"Collaborators: {resp}")
- list_datasets(status='total', page_size=10, page_number=0)[source]#
List all datasets in the project.
- Returns:
A tuple containing: - A list of datasets if the request is successful. - An error message if the request fails.
- Return type:
tuple
Example
>>> datasets, error = project.list_datasets()
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Datasets: {datasets}")
- list_deployments(page_size=10, page_number=0)[source]#
List all deployments inside the project.
- Returns:
A tuple containing: - A list of deployments if the request is successful. - An error message if the request fails.
- Return type:
tuple
Example
>>> deployments, error = project.list_deployments()
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Deployments: {deployments}")
- list_drift_monitorings(page_size=10, page_number=0)[source]#
Fetch a list of all drift monitorings.
- Returns:
A tuple containing three elements: - API response (dict): The raw response from the API. - error_message (str or None): Error message if an error occurred, None otherwise. - status_message (str): A status message indicating success or failure.
- Return type:
tuple
Example
>>> resp, err, msg = project.list_drift_monitorings()
>>> if err:
...     print(f"Error: {err}")
... else:
...     print(f"Drift monitoring details: {resp}")
- list_experiments(page_size=10, page_number=0)[source]#
List all experiments in the project.
- Returns:
A tuple containing: - A list of experiments if the request is successful. - An error message if the request fails.
- Return type:
tuple
- list_exported_models(page_size=10, page_number=0)[source]#
List all exported models in the project.
- Returns:
A tuple containing: - A list of exported models if the request is successful. - An error message if the request fails.
- Return type:
tuple
Example
>>> exported_models, error = project.list_exported_models()
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Exported models: {exported_models}")
- list_model_train(page_size=10, page_number=0)[source]#
List model training sessions in the project with pagination.
- Returns:
A tuple containing: - A paginated list of model training sessions if the request is successful. - An error message if the request fails.
- Return type:
tuple
Example
>>> model_train_sessions, error = project.list_model_train()
>>> if error:
...     print(f"Error: {error}")
... else:
...     print(f"Model training sessions: {model_train_sessions}")
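The list_* methods above all accept page_size and page_number and return a (results, error) tuple. A generic pager (a sketch, not part of the SDK, assuming an empty page signals the last page) can walk through every page of any of them:

```python
def iter_pages(list_fn, page_size=10):
    """Yield every item from a paginated list method that accepts
    page_size/page_number keywords and returns (results, error)."""
    page_number = 0
    while True:
        results, error = list_fn(page_size=page_size, page_number=page_number)
        if error:
            raise RuntimeError(error)
        if not results:  # assumed end-of-data signal
            return
        yield from results
        page_number += 1

# Usage (assuming `project` is an initialized Projects instance):
# all_sessions = list(iter_pages(project.list_model_train))
```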
- update_permissions(collaborator_id, permissions)[source]#
Update the permissions for a collaborator in the current project.
This function updates the permissions for a specified collaborator in the current project.
- Parameters:
collaborator_id (str) – The ID of the collaborator whose permissions are to be updated.
permissions (list) – A list containing the updated permissions for various project services.
- Returns:
A tuple containing three elements: - API response (dict): The raw response from the API. - error_message (str or None): Error message if an error occurred, None otherwise. - status_message (str): A status message indicating success or failure.
- Return type:
tuple
Example
>>> collaborator_id = "12345"
>>> permissions = [
...     "v1.0",
...     True,  # isProjectAdmin
...     {"read": True, "write": True, "admin": False},   # datasetsService
...     {"read": True, "write": False, "admin": False},  # modelsService
...     {"read": True, "write": False, "admin": False},  # annotationService
...     {"read": True, "write": False, "admin": False},  # byomService
...     {"read": True, "write": True, "admin": False},   # deploymentService
...     {"read": True, "write": False, "admin": False},  # inferenceService
... ]
>>> resp, err, msg = project.update_permissions(collaborator_id, permissions)
>>> if err:
...     print(f"Error: {err}")
... else:
...     print("Permissions updated successfully")
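Note that update_permissions takes a positional list (a version string, the isProjectAdmin flag, then one dict per service in a fixed order), while invite_user_to_project takes a service-keyed dict. A small converter (hypothetical, with the ordering taken from the example above) keeps the two shapes in sync:

```python
# Per-service order expected by update_permissions, per the example above.
PERMISSION_ORDER = ("datasetsService", "modelsService", "annotationService",
                    "byomService", "deploymentService", "inferenceService")

def to_permission_list(perms, version="v1.0", is_project_admin=False):
    """Convert a service-keyed permissions dict (the shape used by
    invite_user_to_project) into the positional list layout used by
    update_permissions."""
    return [version, is_project_admin] + [perms[s] for s in PERMISSION_ORDER]

perm_dict = {s: {"read": True, "write": False, "admin": False}
             for s in PERMISSION_ORDER}
perm_list = to_permission_list(perm_dict)
```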