Configure an AI coding provider

AI coding provider setup happens on the worker side. That keeps provider readiness, credentials, and execution close to the environment where code is checked out and validated.

For the first real test, use the provider page in the local worker UI.

Install the worker first, then open http://127.0.0.1:8010/. If the worker runs on a VPS, create an SSH tunnel to port 8010 before opening the worker UI.
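A standard SSH local port forward is enough for the VPS case; the user and host below are placeholders for your own VPS:

```shell
# Forward local port 8010 to the worker UI listening on the VPS loopback.
# "user" and "your-vps-host" are placeholders.
ssh -N -L 8010:127.0.0.1:8010 user@your-vps-host
```

While the tunnel is open, http://127.0.0.1:8010/ on your machine reaches the worker UI on the VPS.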

[Screenshot: MergeLoom local worker UI provider setup page with provider cards and readiness state.]
Local worker UI: configure provider cards and check readiness from the worker gateway, not from the customer controller.

MergeLoom supports two provider configuration modes.

  • ui: workspace operators configure providers through the worker UI. This is the easiest first-run path.
  • provisioned: provider settings are supplied by environment variables, Kubernetes Secret values, Helm values, or another platform-controlled process.

Docker Compose defaults to UI-managed provider config:

```shell
JCA_PROVIDER_CONFIG_MODE=ui
```
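In a Compose file this is an ordinary environment entry on the gateway service. A minimal sketch, using the worker-gateway service name that appears in the shell commands later in this guide:

```yaml
# Sketch of the relevant Compose fragment; other service settings omitted.
services:
  worker-gateway:
    environment:
      JCA_PROVIDER_CONFIG_MODE: ui
```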

The Helm chart runs the gateway in UI-managed provider mode by default. Executors read provider config from the gateway:

```yaml
gateway:
  JCA_PROVIDER_CONFIG_MODE: ui
executors:
  JCA_PROVIDER_CONFIG_MODE: gateway
```

For production Kubernetes installs, put provider API keys in a Kubernetes Secret and pass the Secret name through secret.existingSecretName.

Codex CLI is currently the easiest path for a first real test.

Use:

  • backend type: cli
  • provider: codex-cli
  • auth mode: local CLI login

Recommended path:

  1. Start the worker.
  2. Open the local worker UI at http://127.0.0.1:8010/.
  3. Open the provider setup page.
  4. Choose the codex-cli provider card.
  5. Click Authenticate in browser.
  6. Complete the device-auth flow.
  7. Click Check login status.

Shell fallback:

```shell
docker compose exec worker-gateway codex login --device-auth
```
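To check the result from the shell instead of the UI button, recent Codex CLI builds expose a status subcommand; confirm your installed version supports it before relying on it:

```shell
# Verify the Codex CLI session inside the gateway container.
# Assumes your Codex CLI version provides "login status".
docker compose exec worker-gateway codex login status
```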

[Screenshot: MergeLoom local worker UI provider card after provider setup and readiness checks.]
Local worker UI: after authentication, use the provider card to confirm the worker can call the selected AI coding provider.

Claude Code CLI follows the same worker-local pattern.

Recommended path:

  1. Start the worker.
  2. Open the worker provider page.
  3. Choose the claude-code-cli provider card.
  4. Click Authenticate in browser.
  5. Complete the browser sign-in flow.
  6. Click Check login status.

Shell fallback:

```shell
docker compose exec worker-gateway claude auth login
```

Use an OpenAI-compatible endpoint for private or self-hosted services that expose an OpenAI-style chat completions API.

Required values:

  • Base URL: endpoint root that exposes /chat/completions, for example http://local-model:8000/v1.
  • Default model: model or deployment name sent to the endpoint.
  • Auth mode: none for trusted local endpoints, bearer for endpoints that require a bearer token.
  • API key: required when auth mode is bearer.

The self-test sends a forced function call. It must pass before you use the endpoint for real tickets.

MergeLoom requires real tool-calling support because the worker needs to inspect files, edit files, run allowed commands, and report final results.
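The self-test is roughly a forced tool call. A hand-rolled equivalent, assuming the example base URL from the table above and a hypothetical read_file tool (add an Authorization header when auth mode is bearer):

```shell
# Forced tool call against an OpenAI-style endpoint.
# Base URL, model name, and tool name are placeholders, not MergeLoom internals.
curl -s http://local-model:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "my-local-model",
    "messages": [{"role": "user", "content": "Read README.md"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "read_file",
        "parameters": {
          "type": "object",
          "properties": {"path": {"type": "string"}},
          "required": ["path"]
        }
      }
    }],
    "tool_choice": {"type": "function", "function": {"name": "read_file"}}
  }'
```

An endpoint with working tool calling returns a tool_calls entry naming read_file; a plain text answer means the model or server does not actually support forced function calls.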

Use Vertex AI when the worker should call Gemini or another Vertex publisher model from your Google Cloud environment.

Recommended auth modes:

  • Service account JSON: simplest first setup. Store it in the worker UI only for tests; in production, use a Kubernetes Secret or Docker secret.
  • ADC / Workload Identity: for GKE or Google-hosted workers. Preferred on Kubernetes because the pod can use workload identity without a long-lived JSON key.
  • Bearer access token: temporary testing only. Tokens expire, so this is not a production path.

For standard Gemini/Vertex models, choose Publisher model path and enter:

  • project ID
  • location, such as global
  • publisher, usually google
  • model, such as gemini-2.5-pro

Use Raw endpoint URL only for custom or advanced Vertex endpoints.
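For orientation, the four publisher-path values map onto Vertex AI's documented REST endpoint shape. A sketch that assembles the regional URL from the example values (for location global, the host is aiplatform.googleapis.com with no region prefix):

```shell
# Placeholder values; substitute your own project and region.
PROJECT_ID="my-project"
LOCATION="us-central1"
PUBLISHER="google"
MODEL="gemini-2.5-pro"

# Regional Vertex AI publisher-model endpoint, per Google's REST reference.
echo "https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/${PUBLISHER}/models/${MODEL}:generateContent"
```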

For GKE Workload Identity Federation, configure the pod identity outside MergeLoom, then set the worker provider auth method to ADC / Workload Identity. No service account JSON is needed. See Google Cloud docs for Vertex AI authentication and GKE Workload Identity Federation.

Google also documents that service account keys need careful handling; see best practices for managing service account keys.

For Amazon Bedrock, the recommended auth modes are:

  • AWS default credential chain / IAM role: for production, EKS IRSA, EC2/ECS roles, or mounted AWS config. Preferred; the worker asks the AWS SDK credential chain for temporary credentials.
  • Static access keys: quick tests or locked-down temporary keys. For production, store them in a Secret, not plain Helm values.
  • AWS profile: mounted ~/.aws/config or ~/.aws/credentials. Useful for platform teams that already manage profile files.

Required values are region and model ID. For production Kubernetes installs, use an IAM role through EKS IRSA or another workload identity mechanism rather than long-lived access keys.
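With EKS IRSA, the pod identity rides on a service account annotation, which in this chart maps to serviceAccount.annotations. A sketch with a placeholder role ARN (the role's trust policy must allow this service account):

```yaml
# Helm values fragment; the role ARN is a placeholder, not a real resource.
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/mergeloom-bedrock-invoke
```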

For Azure AI Foundry, the recommended auth modes are:

  • API key: simple first setup. Store it in a Secret for production.
  • Entra service principal: non-AKS automation where a client secret is acceptable. Requires tenant ID, client ID, and client secret.
  • Managed identity: Azure-hosted workers. Use a system-assigned or user-assigned managed identity.
  • Azure workload identity: AKS production installs. Preferred on AKS because Kubernetes can project a federated token to the pod.
  • Bearer access token: temporary testing only. Tokens expire, so this is not a production path.

For Azure workload identity, configure AKS and the federated identity credential outside MergeLoom, then set:

  • JCA_AZURE_FOUNDRY_AUTH_METHOD=workload_identity
  • AZURE_TENANT_ID
  • AZURE_CLIENT_ID
  • AZURE_FEDERATED_TOKEN_FILE, if your platform does not inject it

The token scope used by the worker is https://cognitiveservices.azure.com/.default, matching Microsoft Foundry guidance. See Microsoft docs for Foundry authentication and authorization and AKS Workload Identity.

The Helm chart can consume an existing Kubernetes Secret with envFrom.

Example:

```shell
kubectl create namespace mergeloom --dry-run=client -o yaml | kubectl apply -f -

kubectl create secret generic mergeloom-worker-env \
  --namespace mergeloom \
  --from-literal=JCA_WORKER_ENROLLMENT_TOKEN="worker-enrollment-token" \
  --from-literal=JCA_OPENAI_API_KEY="openai-api-key" \
  --from-literal=JCA_ANTHROPIC_API_KEY="anthropic-api-key"

helm upgrade --install mergeloom-worker oci://registry-1.docker.io/mergeloom/mergeloom-worker \
  --version 1.0.1 \
  --namespace mergeloom \
  --create-namespace \
  --set worker.controlPlaneUrl="https://controller.mergeloom.ai" \
  --set worker.tenantSlug="customer-slug" \
  --set secret.existingSecretName="mergeloom-worker-env"
```

Supported sensitive Secret keys include:

  • JCA_WORKER_ENROLLMENT_TOKEN
  • JCA_WORKER_CLUSTER_TOKEN
  • JCA_OPENAI_API_KEY
  • JCA_ANTHROPIC_API_KEY
  • JCA_VERTEX_SERVICE_ACCOUNT_JSON
  • JCA_VERTEX_ACCESS_TOKEN
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN
  • JCA_AZURE_FOUNDRY_API_KEY
  • AZURE_CLIENT_SECRET
  • JCA_AZURE_FOUNDRY_BEARER_TOKEN

Non-sensitive model defaults can still be supplied through Helm values such as providerEnv.openaiModel, providerEnv.anthropicModel, providerEnv.vertexModel, providerEnv.bedrockModelId, and providerEnv.azureFoundryModel.
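As a values file, those non-sensitive defaults might look like this; every model name below is illustrative only, not a recommendation:

```yaml
# Illustrative providerEnv values; substitute the models your providers serve.
providerEnv:
  openaiModel: "example-openai-model"
  anthropicModel: "example-anthropic-model"
  vertexModel: "gemini-2.5-pro"
  bedrockModelId: "example-bedrock-model-id"
  azureFoundryModel: "example-foundry-deployment"
```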

For Kubernetes installs, the recommended production pattern is:

  • use secret.existingSecretName for enrollment tokens and static provider secrets
  • use workload identity or IAM roles instead of static cloud keys where available
  • use serviceAccount.annotations, podLabels, and podAnnotations in the Helm chart to attach the pod identity your cloud platform expects

When a job runs, provider and model selection is resolved in this order:

  1. ticket or issue directive provider=...
  2. ticket or issue directive model=...
  3. repository workflow provider/model defaults
  4. tenant default provider and optional tenant-default model from the worker setup page
  5. worker-stored provider default model/profile
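The precedence behaves like ordinary shell default expansion. A toy sketch with hypothetical variable names (not actual MergeLoom internals), where the first non-empty source wins:

```shell
# Toy model of the provider lookup order; names are illustrative only.
ticket_provider=""                    # 1-2. no ticket/issue directive set
repo_provider=""                      # 3. no repository workflow default
tenant_provider="codex-cli"           # 4. tenant default wins here
worker_provider="openai-compatible"   # 5. worker-stored fallback

provider="${ticket_provider:-${repo_provider:-${tenant_provider:-${worker_provider}}}}"
echo "${provider}"
```

With the values above, the tenant default is the first non-empty source, so codex-cli is selected.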

Before running a real job, confirm that:

  • provider login or API key is valid
  • provider readiness check passes
  • selected model supports the required tool-calling behavior
  • worker can reach the provider from inside its container or pod
  • your allowed commands and validation commands match the repository