# Configuration
lin is designed to work with minimal configuration by auto-detecting your setup. However, you can customize every aspect of its behavior.
## Config File

lin uses unconfig to find and load your configuration files. You can use one of the following:
- `lin.config.ts` (or `.js`, `.mjs`, etc.)
- `.linrc` (without extension, or `.json`)
- a `lin` property in `package.json`
## i18n Config

If your framework is not auto-detected, you can put your i18n config inside your lin config, or create a separate `i18n.config.ts` file.
Add an `i18n` object to your main `lin.config.ts` file.
```ts
import { defineConfig } from '@rttnd/lin'

export default defineConfig({
  i18n: {
    locales: ['en-US', 'es-ES'],
    defaultLocale: 'en-US',
    directory: 'locales',
  },
  // ... other lin config
})
```

If you don't plan to use other lin config, just create an `i18n.config.ts` file.
```ts
import { defineI18nConfig } from '@rttnd/lin'

export default defineI18nConfig({
  locales: ['en-US', 'es-ES'],
  defaultLocale: 'en-US',
  directory: 'locales',
})
```

## Adapters
lin supports multiple adapters to read and write translations. By default, it uses the `json` adapter.
```ts
export default defineConfig({
  adapters: {
    json: {
      directory: 'locales',
    },
    markdown: {
      files: ['content/**/*.md'],
    },
  },
})
```

### JSON Adapter
The default adapter. It reads and writes JSON files.
- `directory`: The directory where your locale files are stored. Defaults to `locales`.
- `sort`: How to sort the keys in the JSON file.
  - `'abc'`: Alphabetical order.
  - `'def'`: Same order as the default locale.
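For example, if you want keys sorted alphabetically, the JSON adapter could be configured like this (a minimal sketch using only the options listed above):

```ts
import { defineConfig } from '@rttnd/lin'

export default defineConfig({
  adapters: {
    json: {
      // where locale files live
      directory: 'locales',
      // 'abc' = alphabetical, 'def' = same order as the default locale
      sort: 'abc',
    },
  },
})
```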
### Markdown Adapter

Used to translate Markdown files.
- `files`: An array of glob patterns to find Markdown files.
- `localesDir`: Where to store the translated Markdown files. Defaults to the same directory as the source file, but with the locale code appended (e.g., `file.es.md`).
- `output`: The output format.
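A sketch of a Markdown adapter setup; the `localesDir` value here is purely illustrative, since by default translations are written next to the source file:

```ts
import { defineConfig } from '@rttnd/lin'

export default defineConfig({
  adapters: {
    markdown: {
      // translate every Markdown file under content/
      files: ['content/**/*.md'],
      // illustrative path; omit this to keep translations next to the source (e.g. file.es.md)
      localesDir: 'content/locales',
    },
  },
})
```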
## LLM Config

You need to specify the model and the provider in your configuration or via the `--model` (`-m`) and `--provider` (`-p`) CLI flags.
Make sure the corresponding API key is set in your env variables (e.g., `OPENAI_API_KEY`).
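For example, with OpenAI you might export the key and pass the model and provider per invocation (the key value below is a placeholder):

```bash
# placeholder credential; use your real API key
export OPENAI_API_KEY=sk-...

# -p / --provider and -m / --model override the config
lin sync -p openai -m gpt-4.1-mini
```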
Example `lin.config.ts` with LLM options:
```ts
import { defineConfig } from '@rttnd/lin'

export default defineConfig({
  options: {
    provider: 'openai',
    model: 'gpt-4.1-mini',
    temperature: 0,
  },
})
```

Use `lin models` to see all available models.
Available providers: `openai`, `anthropic`, `google`, `xai`, `mistral`, `groq`, `cerebras`, `azure`, `openrouter`.
Note that lin can work with models not listed here; just pass the model name as a string.
To add a new provider, consider opening a PR. It’s easy to do!
All properties under `options` are passed to the Vercel AI SDK.
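This means extra generation settings can be forwarded alongside the documented ones; `topP` below is only an illustration and assumes your installed AI SDK version accepts it:

```ts
export default defineConfig({
  options: {
    provider: 'openai',
    model: 'gpt-4.1-mini',
    temperature: 0,
    // forwarded as-is to the AI SDK call (assumption, not a lin-specific option)
    topP: 0.9,
  },
})
```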
## Presets

To save LLM options, you can define and name different model configurations in your `lin.config.ts` file.
```ts
export default defineConfig({
  options: {
    provider: 'openai',
    model: 'gpt-4.1-mini',
    temperature: 0.2,
  },
  presets: {
    'creative-claude': {
      provider: 'anthropic',
      model: 'claude-sonnet-4-0',
      temperature: 0.6,
      context: 'Use a playful tone.',
    },
    'fast-deepseek': {
      provider: 'groq',
      model: 'deepseek-r1-distill-llama-70b',
    },
  },
})
```

You can then activate a preset using the `--model` flag. Any other CLI flags will override the preset's values.
```bash
# Use the 'creative-claude' preset
lin sync -m creative-claude

# Use the 'fast-deepseek' preset, but override the temperature
lin add ui.new.feature A new feature -m fast-deepseek -t 0
```

## Context
This simple string is directly added to the system prompt. Use it to provide extra information about your project.
```ts
export default defineConfig({
  context: 'Translate everything in a formal and polite tone.',
})
```

## Batching
lin provides several options to control how translation requests are batched, which is crucial for handling large projects and managing LLM context limits.
### Locale Batching (`batchSize`)

The `batchSize` option controls how many locales are processed together in a single `translate` or `sync` operation. This is useful when you have a large number of languages. The default is 10.
You can set this in your `lin.config.ts` or use the `--batchSize` (or `-b`) flag.
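For example, using the documented `-b` flag (a sketch; the right batch size depends on your project):

```bash
# process locales in batches of 5 instead of the default 10
lin sync -b 5
```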
### Key-based Batching (`keyBatchSize` & `charLimit`)

Within each locale, lin further batches the keys sent to the LLM. A new batch is created when either of these limits is reached:
- `keyBatchSize`: The maximum number of keys in a batch. (Default: `50`)
- `charLimit`: The maximum number of characters of all values in a batch. This acts as a proxy for token limits. (Default: `4000`)
These can be configured in `lin.config.ts` or with the `--kbs` and `--cl` flags.
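For example (a sketch using the flags above):

```bash
# at most 25 keys and roughly 2000 characters of values per LLM request
lin translate --kbs 25 --cl 2000
```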
## `with` (Context Profiles)

The `with` option allows you to control which locale files are included in the LLM's context window. This can significantly improve translation quality by providing the model with more context about your project's wording and style.
- `none` (default): Only the keys to be translated are sent to the LLM.
- `def`: Includes the entire default locale JSON file.
- `tgt`: Includes the full JSON of each locale currently being translated.
- `both`: Includes both the default locale file and the target locale files.
- `all`: Includes every locale JSON file in the context. This may be expensive.
- `<locale>`: You can also provide one or more specific locale codes (e.g., `es-ES`, `fr`).
```ts
export default defineConfig({
  with: 'tgt',
})
```

```bash
# Override config and use 'both' profile
lin sync -w both

# Provide specific locales for context
lin add ui.new.key New Key -w es-ES -w fr

# Force no additional context
lin sync -w none
```

## Parser Config
You can configure the key parser to look for keys in different files or use custom lexers.
This is used by the `check` and `translate` commands.
```ts
export default defineConfig({
  parser: {
    input: ['src/**/*.{js,jsx,ts,tsx,vue,svelte,astro}'],
    // ... other i18next-parser options
  },
})
```