Fine Tunes

GET /fine-tunes

List your organization's fine-tune jobs.
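
The sketch below lists fine-tune jobs with Python's `requests` library. The `https://api.openai.com/v1` base URL and the `OPENAI_API_KEY` environment variable are assumptions for the example, not part of this reference.

Python

import os

import requests

# List fine-tune jobs visible to the API key's organization.
# Assumptions: the standard https://api.openai.com/v1 base URL and an API key
# exported as OPENAI_API_KEY.
API_BASE = "https://api.openai.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

resp = requests.get(f"{API_BASE}/fine-tunes", headers=HEADERS)
resp.raise_for_status()
for job in resp.json()["data"]:
    print(job["id"], job["status"], job["fine_tuned_model"])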

Response

JSON
{
  "object": "string",
  "data": [
    {
      "id": "string",
      "object": "string",
      "created_at": 0,
      "updated_at": 0,
      "model": "string",
      "fine_tuned_model": "string",
      "organization_id": "string",
      "status": "string",
      "hyperparams": {},
      "training_files": [
        {
          "id": "string",
          "object": "string",
          "bytes": 0,
          "created_at": 0,
          "filename": "string",
          "purpose": "string",
          "status": "string",
          "status_details": {}
        }
      ],
      "validation_files": [
        {
          "id": "string",
          "object": "string",
          "bytes": 0,
          "created_at": 0,
          "filename": "string",
          "purpose": "string",
          "status": "string",
          "status_details": {}
        }
      ],
      "result_files": [
        {
          "id": "string",
          "object": "string",
          "bytes": 0,
          "created_at": 0,
          "filename": "string",
          "purpose": "string",
          "status": "string",
          "status_details": {}
        }
      ],
      "events": [
        {
          "object": "string",
          "created_at": 0,
          "level": "string",
          "message": "string"
        }
      ]
    }
  ]
}

POST /fine-tunes

Create a job that fine-tunes a specified model from a given dataset.

Body params

  • Name
    training_file
    Type
    string
    Required
    Description

    The ID of an uploaded file that contains training data. See [upload file](/docs/api-reference/files/upload) for how to upload a file. Your dataset must be formatted as a JSONL file, where each training example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning/creating-training-data) for more details.

  • Name
    validation_file
    Type
    string
    Description

    The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the [fine-tuning results file](/docs/guides/fine-tuning/analyzing-your-fine-tuned-model). Your train and validation data should be mutually exclusive. Your dataset must be formatted as a JSONL file, where each validation example is a JSON object with the keys "prompt" and "completion". Additionally, you must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning/creating-training-data) for more details.

  • Name
    model
    Type
    string
    Description

    The name of the base model to fine-tune. You can select one of "ada", "babbage", "curie", "davinci", or a fine-tuned model created after 2022-04-21. To learn more about these models, see the [Models](https://platform.openai.com/docs/models) documentation.

  • Name
    n_epochs
    Type
    integer
    Description

    The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.

  • Name
    batch_size
    Type
    integer
    Description

    The batch size to use for training. The batch size is the number of training examples used in a single forward and backward pass. By default, the batch size is dynamically configured to be ~0.2% of the number of examples in the training set, capped at 256. In general, we've found that larger batch sizes tend to work better for larger datasets.

  • Name
    learning_rate_multiplier
    Type
    number
    Description

    The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this value. By default, the learning rate multiplier is 0.05, 0.1, or 0.2 depending on the final `batch_size` (larger learning rates tend to perform better with larger batch sizes). We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results.

  • Name
    prompt_loss_weight
    Type
    number
    Description

    The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion, which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short. If prompts are extremely long (relative to completions), it may make sense to reduce this weight to avoid over-prioritizing learning the prompt.

  • Name
    compute_classification_metrics
    Type
    boolean
    Description

    If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. These metrics can be viewed in the [results file](/docs/guides/fine-tuning/analyzing-your-fine-tuned-model). In order to compute classification metrics, you must provide a `validation_file`. Additionally, you must specify `classification_n_classes` for multiclass classification or `classification_positive_class` for binary classification.

  • Name
    classification_n_classes
    Type
    integer
    Description

    The number of classes in a classification task. This parameter is required for multiclass classification.

  • Name
    classification_positive_class
    Type
    string
    Description

    The positive class in binary classification. This parameter is needed to generate precision, recall, and F1 metrics when doing binary classification.

  • Name
    classification_betas
    Type
    array[number]
    Description

    If this is provided, we calculate F-beta scores at the specified beta values. The F-beta score is a generalization of the F-1 score. This is only used for binary classification. With a beta of 1 (i.e. the F-1 score), precision and recall are given the same weight. A larger beta puts more weight on recall and less on precision. A smaller beta puts more weight on precision and less on recall.

  • Name
    suffix
    Type
    string
    Description

    A string of up to 40 characters that will be added to your fine-tuned model name. For example, a `suffix` of "custom-model-name" would produce a model name like `ada:ft-your-org:custom-model-name-2022-02-15-04-21-04`.
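
As a sketch of how these body params fit together, the request below creates a fine-tune job with Python's `requests` library. The file IDs are hypothetical placeholders, and the base URL and `OPENAI_API_KEY` environment variable are assumptions for the example.

Python

import os

import requests

API_BASE = "https://api.openai.com/v1"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

# Hypothetical IDs of files previously uploaded with purpose "fine-tune".
payload = {
    "training_file": "file-abc123",
    "validation_file": "file-def456",
    "model": "curie",
    "n_epochs": 4,
    "suffix": "custom-model-name",
}

resp = requests.post(f"{API_BASE}/fine-tunes", headers=HEADERS, json=payload)
resp.raise_for_status()
job = resp.json()
print(job["id"], job["status"])  # the enqueued job's ID and current status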

Response

JSON
{
  "id": "string",
  "object": "string",
  "created_at": 0,
  "updated_at": 0,
  "model": "string",
  "fine_tuned_model": "string",
  "organization_id": "string",
  "status": "string",
  "hyperparams": {},
  "training_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "validation_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "result_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "events": [
    {
      "object": "string",
      "created_at": 0,
      "level": "string",
      "message": "string"
    }
  ]
}

GET /fine-tunes/{fine_tune_id}

Get info about a specific fine-tune job.

Path parameters

  • Name
    fine_tune_id
    Type
    string
    Required
    Description

    The ID of the fine-tune job
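
A minimal retrieval sketch in Python, again assuming the `https://api.openai.com/v1` base URL, an `OPENAI_API_KEY` environment variable, and a hypothetical job ID:

Python

import os

import requests

API_BASE = "https://api.openai.com/v1"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

fine_tune_id = "ft-abc123"  # hypothetical fine-tune job ID

resp = requests.get(f"{API_BASE}/fine-tunes/{fine_tune_id}", headers=HEADERS)
resp.raise_for_status()
job = resp.json()
print(job["status"], job["fine_tuned_model"])

Polling this endpoint and checking `status` is a simple way to wait until `fine_tuned_model` is populated.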

Response

JSON
{
  "id": "string",
  "object": "string",
  "created_at": 0,
  "updated_at": 0,
  "model": "string",
  "fine_tuned_model": "string",
  "organization_id": "string",
  "status": "string",
  "hyperparams": {},
  "training_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "validation_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "result_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "events": [
    {
      "object": "string",
      "created_at": 0,
      "level": "string",
      "message": "string"
    }
  ]
}

POST /fine-tunes/{fine_tune_id}/cancel

Immediately cancel a fine-tune job.

Path parameters

  • Name
    fine_tune_id
    Type
    string
    Required
    Description

    The ID of the fine-tune job to cancel
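
A cancellation sketch under the same assumptions (base URL, `OPENAI_API_KEY` environment variable, hypothetical job ID):

Python

import os

import requests

API_BASE = "https://api.openai.com/v1"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

fine_tune_id = "ft-abc123"  # hypothetical fine-tune job ID

# Cancel the job; the response is the fine-tune object with its updated status.
resp = requests.post(f"{API_BASE}/fine-tunes/{fine_tune_id}/cancel", headers=HEADERS)
resp.raise_for_status()
print(resp.json()["status"])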

Response

JSON
{
  "id": "string",
  "object": "string",
  "created_at": 0,
  "updated_at": 0,
  "model": "string",
  "fine_tuned_model": "string",
  "organization_id": "string",
  "status": "string",
  "hyperparams": {},
  "training_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "validation_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "result_files": [
    {
      "id": "string",
      "object": "string",
      "bytes": 0,
      "created_at": 0,
      "filename": "string",
      "purpose": "string",
      "status": "string",
      "status_details": {}
    }
  ],
  "events": [
    {
      "object": "string",
      "created_at": 0,
      "level": "string",
      "message": "string"
    }
  ]
}

GET /fine-tunes/{fine_tune_id}/events

Get fine-grained status updates (events) for a fine-tune job.

Path parameters

  • Name
    fine_tune_id
    Type
    string
    Required
    Description

    The ID of the fine-tune job to get events for.

Query parameters

  • Name
    stream
    Type
    boolean
    Description

    Whether to stream events for the fine-tune job. If set to `true`, events will be sent as data-only server-sent events as they become available. The stream will terminate with a `data: [DONE]` message when the job is finished (succeeded, cancelled, or failed).

    If set to `false`, only events generated so far will be returned.
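
The sketch below shows both modes under the same assumptions (base URL, `OPENAI_API_KEY` environment variable, hypothetical job ID): a plain request that returns the events generated so far, and a streaming request that reads server-sent events until the `data: [DONE]` sentinel.

Python

import os

import requests

API_BASE = "https://api.openai.com/v1"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

fine_tune_id = "ft-abc123"  # hypothetical fine-tune job ID
events_url = f"{API_BASE}/fine-tunes/{fine_tune_id}/events"

# Non-streaming: fetch all events generated so far.
resp = requests.get(events_url, headers=HEADERS)
resp.raise_for_status()
for event in resp.json()["data"]:
    print(event["created_at"], event["level"], event["message"])

# Streaming: read data-only server-sent events until the job finishes.
with requests.get(events_url, headers=HEADERS, params={"stream": "true"}, stream=True) as stream_resp:
    stream_resp.raise_for_status()
    for raw_line in stream_resp.iter_lines():
        if not raw_line:
            continue  # skip keep-alive blank lines
        line = raw_line.decode("utf-8")
        if line.strip() == "data: [DONE]":
            break
        if line.startswith("data: "):
            print(line[len("data: "):])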

Response

JSON
{
  "object": "string",
  "data": [
    {
      "object": "string",
      "created_at": 0,
      "level": "string",
      "message": "string"
    }
  ]
}