
Error Handling

At some point you will run into a failed batch or another issue. This guide covers the most common problems and how to fix them.

Start by checking the batch log. It will tell you why the batch failed.

Most failures come down to one of two causes:

  • The batch hit the request limit too often
  • The batch settings are incompatible with the selected endpoint or model

Check your RPM (requests per minute) and TPM (tokens per minute) limits on your provider’s dashboard, and set the RPM in your batch settings to match. If you are processing large files, lower it further: large requests consume more tokens each, so the TPM limit can be exhausted well before the RPM limit is reached.
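To illustrate what the RPM setting does, here is a minimal sketch of a sliding-window rate limiter of the kind a batch runner applies internally. The class name and interface are hypothetical, not part of the app:

```python
import time
from collections import deque

class RpmThrottle:
    """Sliding-window client-side rate limiter.

    `rpm` is the requests-per-minute cap shown on your provider's
    dashboard. Illustrative sketch only, not the app's implementation.
    """

    def __init__(self, rpm, window=60.0):
        self.rpm = rpm
        self.window = window
        self.sent = deque()  # timestamps of recent requests

    def delay(self, now=None):
        """Seconds to wait before the next request may be sent."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.rpm:
            return 0.0
        # Wait until the oldest request leaves the window.
        return self.window - (now - self.sent[0])

    def record(self, now=None):
        """Mark that a request was just sent."""
        self.sent.append(time.monotonic() if now is None else now)
```

Before each request you would call `delay()`, sleep that long, then `record()`. Halving the `rpm` value is effectively what you do manually when large files push you over the TPM limit.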

You can also enable Intelligent Retry, which lets the system adjust the RPM automatically when it detects limit errors.
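Conceptually, Intelligent Retry behaves like an exponential-backoff loop around each request. The sketch below shows the general pattern; `RateLimitError` is a stand-in for whatever exception your provider's client raises on HTTP 429, and the parameters are illustrative:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the HTTP 429 error a provider client raises."""

def call_with_retry(request, max_attempts=5, base_delay=1.0):
    """Retry `request` with exponential backoff on rate-limit errors.

    Waits base_delay * 2**attempt seconds (plus jitter) between
    attempts; re-raises after the final attempt fails.
    """
    for attempt in range(max_attempts):
        try:
            return request()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt
                       + random.uniform(0, base_delay))
```

The jitter spreads retries out so that many parallel requests do not all hammer the endpoint again at the same instant.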

If your batch does not need to finish by a specific time, consider using Provider Batches. This offloads processing to the provider’s side, usually completes within 24 hours, and typically costs 50% less.

Not all settings work with every provider or model. The app currently does not warn you about incompatible combinations before you start a batch, so check the table below before running one.

| Setting | Limitation |
| --- | --- |
| File Upload (instead of a file reader) | Only supported by Gemini, OpenAI, and Anthropic |
| Provider Batches | Only supported by Anthropic and OpenAI; cannot be combined with File Upload |
| Structured JSON output | Not supported by some open-source models and platforms; models that do support it often require the word "json" to appear at least once in the prompt |
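The "json must appear in the prompt" requirement is easy to guard against. Below is a small hypothetical helper (not part of the app) that appends a minimal instruction when the word is missing:

```python
def ensure_json_mentioned(prompt):
    """Guard for structured-output mode.

    Some models reject JSON-mode requests unless the word "json"
    appears somewhere in the prompt. If it is missing, append a
    short instruction; otherwise return the prompt unchanged.
    """
    if "json" in prompt.lower():
        return prompt
    return prompt + "\n\nRespond in valid JSON."
```

Running prompts through a check like this before enabling Structured JSON output avoids one of the more confusing failure modes in the batch log.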