ModelFront API docs

How to use the ModelFront API

The ModelFront API predicts the quality of machine translations.

The REST API returns sentence-level quality scores.

Base endpoint URL:
Versions: v1
Methods: predict, languages, models

It supports batching. If you request an engine and a row does not include a translation, the API automatically gets the translation from that engine.

To get your API access token and code samples, create an account at and click on API.


The main ModelFront API path is /predict.

HTTP request

POST /predict?sl={ sl }&tl={ tl }&token={ token }&model={ model }&metadata_id={ metadata_id }
sl string required The source language code
tl string required The target language code
token string required Your API access token
model string optional Your custom model ID with which to perform quality prediction
engine string optional The engine with which to translate if no translation is provided
custom_engine_id string optional The custom engine with which to translate.
engine_key string optional The engine key to use. This is required for some custom engines.
metadata_id integer optional A metadata ID that the specified model accepts. The metadata is used to pass certain meta information to the model. The list of supported metadata objects with their IDs can be obtained from the /models endpoint

sl and tl should be among the ModelFront supported languages. model should be the identifier of your custom model. engine should be from the ModelFront supported engines. Use /models to get the supported metadata_id values for the model.
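In JavaScript, the query string above can be assembled with Node's built-in URLSearchParams. A minimal sketch with example sl and tl values (the endpoint URL itself is omitted here, as elsewhere in these docs):

```javascript
// Build the /predict query string; sl and tl are example values.
// Append the result to your /predict endpoint URL.
const params = new URLSearchParams({
  sl: 'en',
  tl: 'es',
  token: process.env.MODELFRONT_TOKEN || '<your access token>'
});

console.log(params.toString());
```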

Request body

{
  "rows": [
    {
      "original": string,
      "translation": string,
      "id": string
    }
  ]
}
rows {row}[] required The list of row objects
{row}.original string required The original text
{row}.translation string optional The translated text to be scored
{row}.id string optional ID of the row
metadata {} optional (deprecated) The metadata value

For optimal performance, requests should send batches of multiple rows. A single request can include up to 30 rows.

For optimal predictions, every row should include at most one full sentence and no more than 500 characters in original and translation.
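Chunking a large input into requests of at most 30 rows can be sketched as follows (the rows here are synthetic placeholders):

```javascript
// Split rows into batches of at most 30, the per-request maximum.
const BATCH_SIZE = 30;

function toBatches(rows, size = BATCH_SIZE) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// 75 rows -> 3 batches of 30, 30 and 15 rows.
const rows = Array.from({ length: 75 }, (_, i) => ({
  original: `Sentence ${i}`,
  id: String(i)
}));
console.log(toBatches(rows).map(b => b.length)); // [ 30, 30, 15 ]
```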

Response body

If there is no error, then the response will contain a status and rows. The rows correspond to the request rows and they will include the row ID if the ID was provided in the request. A translation is included if it was requested.

{
  "status": "ok",
  "rows": [
    {
      "translation": string,
      "quality": number,
      "risk": number,
      "id": string
    }
  ]
}

quality is a floating-point number with a value from 0.0 to 1.0. It can be parsed by JavaScript parseFloat() or Python float().

risk is equivalent to quality subtracted from 1.0.
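For example, parsing a sample response row and deriving risk from quality:

```javascript
// Derive risk from a row's quality score. The row here is a
// hard-coded sample response row, not a live API result.
const row = { quality: 0.9972, id: '1' };

const quality = parseFloat(row.quality); // 0.0 to 1.0
const risk = 1.0 - quality;

console.log(risk.toFixed(4)); // 0.0028
```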


The ModelFront API path /languages returns a list of supported languages. 🌍

It provides a map of language codes to language names.

You can even hit it in your browser at

Example response body

{
    "status": "ok",
    "languages": {
      "af": "Afrikaans",
      "sq": "Albanian",
      "am": "Amharic",
      "ar": "Arabic",
      "hy": "Armenian",
      "as": "Assamese",
      "az": "Azerbaijani",
      "ba": "Bashkir",
      "eu": "Basque",
      "be": "Belarusian",
      "bn": "Bengali",
      "bs": "Bosnian",
      "bg": "Bulgarian",
      "my": "Burmese",
      "yue": "Cantonese",
      "ca": "Catalan",
      "ceb": "Cebuano",
      "zh": "Chinese",
      "zh-Hans": "Chinese | Simplified",
      "zh-Hant": "Chinese | Traditional",
      "co": "Corsican",
      "hr": "Croatian",
      "cs": "Czech",
      "da": "Danish",
      "prs": "Dari",
      "dv": "Divehi",
      "nl": "Dutch",
      "en": "English",
      "en-gb": "English | Great Britain",
      "en-us": "English | United States",
      "et": "Estonian",
      "fj": "Fijian",
      "fi": "Finnish",
      "fr": "French",
      "fr-ca": "French | Canada",
      "fr-fr": "French | France",
      "fr-ch": "French | Switzerland",
      "fy": "Frisian",
      "gl": "Galician",
      "ka": "Georgian",
      "de": "German",
      "de-at": "German | Austria",
      "de-de": "German | Germany",
      "de-ch": "German | Switzerland",
      "el": "Greek",
      "gu": "Gujarati",
      "ht": "Haitian",
      "ha": "Hausa",
      "haw": "Hawaiian",
      "he": "Hebrew",
      "hi": "Hindi",
      "hmn": "Hmong",
      "hu": "Hungarian",
      "is": "Icelandic",
      "ig": "Igbo",
      "id": "Indonesian",
      "iu": "Inuktitut",
      "ga": "Irish Gaelic",
      "it": "Italian",
      "it-it": "Italian | Italy",
      "it-ch": "Italian | Switzerland",
      "ja": "Japanese",
      "jv": "Javanese",
      "kn": "Kannada",
      "kk": "Kazakh",
      "km": "Khmer",
      "rw": "Kinyarwanda",
      "ko": "Korean",
      "kmr": "Kurdish | Kurmanji",
      "ckb": "Kurdish | Sorani",
      "ky": "Kyrgyz",
      "lo": "Lao",
      "la": "Latin",
      "lv": "Latvian",
      "lt": "Lithuanian",
      "lmo": "Lombard",
      "nds": "Low Saxon",
      "lb": "Luxembourgish",
      "mk": "Macedonian",
      "mg": "Malagasy",
      "ms": "Malay",
      "ml": "Malayalam",
      "mt": "Maltese",
      "mi": "Maori",
      "mr": "Marathi",
      "yua": "Mayan | Yucatec",
      "mn": "Mongolian",
      "ne": "Nepalese",
      "no": "Norwegian",
      "nb": "Norwegian | Bokmål",
      "nn": "Norwegian | Nynorsk",
      "ny": "Nyanja",
      "or": "Oriya",
      "otq": "Otomi | Querétaro",
      "ps": "Pashto",
      "fa": "Persian",
      "pl": "Polish",
      "pt": "Portuguese",
      "pt-br": "Portuguese | Brazil",
      "pt-pt": "Portuguese | Portugal",
      "pa": "Punjabi",
      "ro": "Romanian",
      "ru": "Russian",
      "sm": "Samoan",
      "gd": "Scots Gaelic",
      "sr": "Serbian",
      "sr-Cyrl": "Serbian | Cyrillic",
      "sr-Latn": "Serbian | Latin",
      "st": "Sesotho",
      "sn": "Shona",
      "sd": "Sindhi",
      "si": "Sinhalese",
      "sk": "Slovak",
      "sl": "Slovenian",
      "so": "Somali",
      "dsb": "Sorbian | Lower",
      "hsb": "Sorbian | Upper",
      "es": "Spanish",
      "es-419": "Spanish | Latin America",
      "es-es": "Spanish | Spain",
      "su": "Sundanese",
      "sw": "Swahili",
      "sv": "Swedish",
      "tl": "Tagalog",
      "ty": "Tahitian",
      "tg": "Tajik",
      "ta": "Tamil",
      "tt": "Tatar",
      "te": "Telugu",
      "th": "Thai",
      "bo": "Tibetan",
      "ti": "Tigrinya",
      "to": "Tongan",
      "tr": "Turkish",
      "tk": "Turkmen",
      "uk": "Ukrainian",
      "ur": "Urdu",
      "ug": "Uyghur",
      "uz": "Uzbek",
      "uz-Cyrl": "Uzbek | Cyrillic",
      "uz-Latn": "Uzbek | Latin",
      "vi": "Vietnamese",
      "cy": "Welsh",
      "xh": "Xhosa",
      "yi": "Yiddish",
      "yo": "Yoruba",
      "zu": "Zulu"
    }
}

The response is not the full list of valid language codes and locales. The ModelFront API is smart enough to handle many variants for supported languages.

⚠️ Some machine translation APIs and translation management systems use other codes or other default locales for these languages or language groups, like Chinese, Norwegian, Kurdish, Uzbek, Serbian, Hmong, Tagalog (Filipino), Dari, Otomi and Maya.

Language codes

A language code must be a valid ISO 639-1 code or ISO 639-2 code.

For example, for English, the correct code is en, and the 3-letter code eng is equivalent to en.

For languages like Cebuano or Alemannic, there is no ISO 639-1 code, so you must use the ISO 639-2 code, like ceb or als.

⚠️ Do not use non-standard codes, like spa for Spanish.

Locales and variants

For most languages, the locale or variant is reduced to the raw language code for the purposes of quality prediction.

For example, en-GB and en-ZA are equivalent to en.

There are two main exceptions:

  1. If the request does not include the translation and instead includes the engine option, then the locale will be passed to the machine translation engine if it supports that locale. For example, DeepL supports en-GB and pt-BR.

  2. If the language is Chinese, then the two major variants are two totally separate target languages. You can select the Traditional script with zh-Hant or with the locales zh-tw, zh-hk or zh-mo. The default script is the Simplified script, so the language code zh (with no locale) or zh-cn is equivalent to zh-Hans. The script code Hant or Hans takes precedence over the country code.

⚠️ cmn is not supported, because Mandarin Chinese is a spoken language.

⚠️ Do not use non-standard locales, like es-LA for Latin American Spanish, or pa-PA for Punjabi.
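The normalization rules above can be sketched in JavaScript. This mirrors the rules as described here, for quality prediction only, and is not an official client:

```javascript
// Reduce a locale to the code used for quality prediction: most
// locales reduce to the raw language code, while Chinese maps to
// zh-Hant or zh-Hans.
function normalize(code) {
  const [lang, ...rest] = code.split('-');
  if (lang.toLowerCase() === 'zh') {
    const sub = rest.join('-').toLowerCase();
    // The script code takes precedence over the country code.
    if (sub.includes('hant')) return 'zh-Hant';
    if (sub.includes('hans')) return 'zh-Hans';
    if (['tw', 'hk', 'mo'].includes(sub)) return 'zh-Hant';
    return 'zh-Hans'; // zh and zh-cn default to Simplified
  }
  return lang.toLowerCase();
}

console.log(normalize('en-GB')); // en
console.log(normalize('zh-TW')); // zh-Hant
console.log(normalize('zh'));    // zh-Hans
```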

More languages

ModelFront supports more than a hundred languages. If a language is unsupported, you can try the codes of related languages or macrolanguages that are supported, or use und.


The ModelFront API path /models returns the list of available models for your account.

HTTP request

GET /models?token={ token }
token string required Your access token

Response body

If there is no error, then the response will contain the status and models - the list of the models the token has access to. Each object in the models list will have additional details, like the model's ID, name and metadata.

{
  "status": "ok",
  "models": [
    {
      "id": string,
      "name": string,
      "metadata": [
        {
          "id": integer,
          "name": string
        }
      ]
    }
  ]
}
status string The request status
models {model}[] The list of models the token has access to
{model}.id string The model ID
{model}.name string The model name
{model}.metadata {metadata}[] The list of model's metadata objects
{model}.{metadata}.id integer The ID of the metadata object
{model}.{metadata}.name string The metadata name

In particular, the response contains the list of metadata objects supported by each model. The metadata IDs are non-negative integers that should be passed to /predict requests to specify e.g. the content type of the rows for the specific model.
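For example, looking up a metadata ID to pass as metadata_id to /predict, given a hard-coded sample /models response (the model and metadata names here are hypothetical):

```javascript
// A sample /models response body; real IDs and names come from the API.
const modelsResponse = {
  status: 'ok',
  models: [
    {
      id: 'my-custom-model',
      name: 'My custom model',
      metadata: [
        { id: 0, name: 'general' },
        { id: 1, name: 'technical' }
      ]
    }
  ]
};

// Find the metadata ID for a given model ID and metadata name.
function metadataId(response, modelId, metadataName) {
  const model = response.models.find(m => m.id === modelId);
  const meta = model && model.metadata.find(m => m.name === metadataName);
  return meta ? meta.id : undefined;
}

console.log(metadataId(modelsResponse, 'my-custom-model', 'technical')); // 1
```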

You can get the identifier and deployment state of your custom models in the console API tab.

If no model is passed to /predict, the default base model is used.


You can get the metadata for your model in the object returned by /models. It contains a list of metadata objects each of which has the id and name of the metadata (see the /models response body above).

If your model supports metadata, then the metadata_id parameter should be passed to /predict. It should be one of the metadata IDs returned by /models.

For backwards compatibility, there is support for passing the actual metadata value in the request body as well. Note that metadata_id takes precedence over the metadata value passed in the request body.


Optionally, you can select a machine translation engine to have the translations filled in for you.

Supported engines
Google                   google      Custom translation with custom_engine_id and engine_key
Microsoft                microsoft   Custom translation with custom_engine_id
DeepL                    deepl       No custom translation supported
ModernMT                 modernmt    Custom translation with custom_engine_id and engine_key
Let ModelFront choose…   *

You will be billed for using the machine translation engine.

If you want to use your own key for billing instead, include engine_key.

If translation is already included in a row, the engine will not be used.
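A sketch of that behavior from the client's point of view, with sample rows: only rows without a translation would be filled in by the engine.

```javascript
// Which rows of a request would be translated by the requested engine:
// only the rows that do not already include a translation.
const rows = [
  { original: 'This is not a test.', translation: 'Esto no es una prueba.', id: '1' },
  { original: 'This is a test.', id: '2' }
];

const toTranslate = rows.filter(row => !row.translation).map(row => row.id);
console.log(toTranslate); // [ '2' ]
```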


In the case of an invalid path or a malformed request, the default FastAPI validation error handling will return an HTTP status code in the 4xx range and a response of the form:

{ "detail": ... }

In case of an error in the request values or on the server, the ModelFront API returns a FastAPI UJSONResponse with a specific HTTP status code and a response of the form:

{
  "status": "error",
  "message": "..."
}

Status codes

Successful response

200 Successful response

Request errors

400 When the body is malformed or parameters like sl, tl, engine, model and metadata_id are missing, invalid or in an invalid combination

401 When the authentication token is missing or invalid

402 When a payment or contract is required

419 When the requested model is unavailable, typically because it is undeployed

429 When there are too many requests from the same IP address

424 When the external machine translation API for the requested engine has returned an error

Server errors

503 When a model, including the default model, is temporarily unavailable

500 When there is any other error

Example code

First set an environment variable with your token on your local system.

export MODELFRONT_TOKEN=<your access token>

Don't have a token? Sign up to the console!

Then send a request.


curl \
  --header "Content-Type: application/json" --request POST \
  --data '{ "rows": [ {"original": "This is not a test.", "translation": "Esto no es una prueba.", "id": "1"} ] }' \
  "<your /predict endpoint URL>?sl=en&tl=es&token=$MODELFRONT_TOKEN"

// npm install request
const util = require('util');
const request = util.promisify(require('request'));

// The endpoint URL is omitted in these docs; substitute your /predict
// URL, with the sl, tl and token query parameters.
const url = `<your /predict endpoint URL>?sl=en&tl=es&token=${ process.env.MODELFRONT_TOKEN }`;
const body = {
  rows: [
    {
      original: "This is not a test.",
      translation: "Esto no es una prueba.",
      id: "1"
    }
  ]
};

(async () => {
  const response = await request({
    url,
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Accept-Charset': 'utf-8' },
    json: true,
    body
  });
  console.log(response.body);
})();

The response you receive should be:

{
  "status": "ok",
  "rows": [
    {
      "quality": 0.9972,
      "id": "1"
    }
  ]
}

How can we hit the API at scale?

⚠️ Do not just send a million lines at once. 😊

If you want to send thousands or millions of lines programmatically, you should stream by sending batch requests of 30 rows sequentially. You can send up to 3 requests in parallel on a cold start.

ModelFront autoscales under load, so the throughput increases after a few minutes.

If you send too many requests, the API will respond with status code 429 - "Slow down!". We generally recommend a retry with exponential backoff.
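A retry loop with exponential backoff can be sketched as follows. The delay schedule (500 ms base, doubling) and attempt count are example choices, not documented requirements, and send stands for any function that performs the /predict request and returns an object with the HTTP status:

```javascript
// Promise-based sleep helper.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Exponential backoff: 500, 1000, 2000, 4000, ... milliseconds.
function backoffDelay(attempt, baseMs = 500) {
  return baseMs * 2 ** attempt;
}

// Retry on 429, waiting longer after each rate-limited attempt.
async function withRetries(send, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await send();
    if (response.status !== 429) return response;
    await sleep(backoffDelay(attempt));
  }
  throw new Error('Still rate-limited after retries');
}
```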

🤔 If you just want to run evaluations on large files, you can just use the console - no coding! You can run files with tens of millions of lines - with just a few clicks. 💻

How can we hit the API at speed?

The response time is similar to that of a machine translation API.

If you want to reduce latency, you should probably still send batch requests. You can reduce latency by sending requests from near or within Google datacenters in North America and Western Europe.

If you require dedicated GPUs, large batches, accelerated autoscaling or on-prem models, contact us.

More questions?

Read our general documentation, or shoot us an email at [email protected]

© ModelFront Inc.