## Viewing attachments
You can preview most images, audio files, videos, and PDFs in the Braintrust UI, or download any file to view it locally. Previews are built into both playground input cells and traces.
In the playground, attachments render in an inline embedded view for easy visual verification during experimentation. In the trace pane, attachments appear as an additional list under the data viewer.
---
file: ./content/docs/guides/datasets.mdx
meta: {
"title": "Datasets"
}
# Datasets
Datasets allow you to collect data from production, staging, evaluations, and even manually, and then
use that data to run evaluations and track improvements over time.
For example, you can use Datasets to:
* Store evaluation test cases for your eval script instead of managing large JSONL or CSV files (see the sketch after this list)
* Log all production generations to assess quality manually or using model-graded evals
* Store user reviewed (
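As a sketch of the first use case above, here is how you might store test cases in a dataset with the TypeScript SDK; the project name, dataset name, and record shape are illustrative:

```typescript
import { initDataset } from "braintrust";

async function main() {
  // Open (or create) a dataset in a project -- names are illustrative.
  const dataset = initDataset("My Project", { dataset: "QA test cases" });

  // Insert a test case instead of appending to a JSONL or CSV file.
  const id = dataset.insert({
    input: { question: "Which is larger, 3^4 or 4^3?" },
    expected: "3^4 = 81 is larger than 4^3 = 64",
    metadata: { source: "manual" },
  });
  console.log(`Inserted record ${id}`);

  // Ensure pending writes reach Braintrust before the process exits.
  await dataset.flush();
}

main();
```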
### Max concurrency
The maximum number of tasks/scorers that run concurrently in the playground. Lowering this value helps you avoid rate-limit errors (HTTP 429 Too Many Requests) from AI providers.
### Strict variables
When this option is enabled, evaluations fail if a dataset row does not include all of the variables referenced in the prompts. For example, if a prompt references `{{question}}`, any row without a `question` field will cause the evaluation to fail.
## Collaboration
Playgrounds are designed for collaboration and automatically synchronize in real-time.
To share a playground, copy the URL and send it to your collaborators. Your collaborators
must be members of your organization to view the playground. You can invite users from the settings page.
#### Specifying an org
If you are part of multiple organizations, you can specify which organization to use by passing the `x-bt-org-name`
header in the SDK.
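A minimal sketch, assuming you call models through the Braintrust AI proxy with the OpenAI SDK; the organization name below is illustrative:

```typescript
import OpenAI from "openai";

// Route requests through the Braintrust AI proxy, and pin every request
// to a specific organization via the x-bt-org-name header.
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
  defaultHeaders: {
    "x-bt-org-name": "Acme Inc.", // illustrative -- use your org's name
  },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```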
### Configuration options
Specify the following for your custom provider.
* **Provider name**: A unique name for your custom provider
* **Model name**: The name of your custom model (e.g., `gpt-3.5-acme`, `my-custom-llama`)
* **Endpoint URL**: The API endpoint for your custom model
* **Format**: The API format (`openai`, `anthropic`, `google`, `window`, or `js`)
* **Flavor**: Whether it's a `chat` or `completion` model (default: `chat`)
* **Headers**: Any custom headers required for authentication or configuration
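Put together, a filled-in configuration might look like the following (all values are illustrative):

```
Provider name: acme-ai
Model name: gpt-3.5-acme
Endpoint URL: https://models.internal.acme.com/v1/chat/completions
Format: openai
Flavor: chat
Headers: Authorization: Bearer <your API key>
```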
### Custom headers and templating
Any headers you add to the configuration are passed through in the request to the custom endpoint. The values of the headers can be templated using Mustache syntax with these supported variables:
* `{{email}}`: Email of the user associated with the Braintrust API key
* `{{model}}`: The model name being requested
Example header configuration:
```
Authorization: Bearer <your API key>
X-User-Email: {{email}}
X-Model: {{model}}
```
### Streaming support
If your endpoint doesn't support streaming natively, set the "Endpoint supports streaming" flag to false. Braintrust will automatically convert the response to streaming format, allowing your models to work in the playground and other streaming contexts.
### Model metadata
You can optionally specify:
* **Multimodal**: Whether the model supports multimodal inputs
* **Input cost**: Cost per million input tokens (for experiment cost estimation)
* **Output cost**: Cost per million output tokens (for experiment cost estimation)
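For example, assuming an input cost of $0.50 and an output cost of $1.50 per million tokens, a run that consumes 2 million input tokens and 0.5 million output tokens would show an estimated price of 2 × $0.50 + 0.5 × $1.50 = $1.75.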