[ NOW IN PUBLIC BETA ]

One API key. One command. Any GPU.

npm install trainfabric · paste your key · trainfabric train
[ PRICING & MODES ]

Three modes. One command.

Training shouldn't require a DevOps team. Here's what actually matters.

One API key. That's the infra.

No AWS credentials. No Docker. No Kubernetes. Authenticate once and every job runs on our GPUs. The complexity is our problem.

Byte-level allocation.

We don't round up to the nearest GB. Your job gets exactly the memory and compute it needs, measured to the byte. You pay for what you use, not what you don't.


Three tiers. One variable.

Efficient, Balanced, or Performance. Each tier changes one variable in the allocation algorithm. Conservative saves money. Aggressive saves time. You pick.

What you used to write. What you write now.

One column does in a single command what the other takes 30 lines to do.

// BEFORE
FROM nvidia/cuda:12.2-devel-ubuntu22.04
RUN apt-get update && apt-get install -y \
    python3.10 python3-pip git curl
RUN pip install torch==2.1 transformers boto3 \
    datasets accelerate wandb
COPY train.py /workspace/
COPY requirements.txt /workspace/
WORKDIR /workspace

# Configure AWS credentials
export AWS_ACCESS_KEY_ID=$KEY
export AWS_SECRET_ACCESS_KEY=$SECRET
export AWS_DEFAULT_REGION=us-west-2

# Upload dataset to S3
aws s3 cp ./dataset.csv s3://my-bucket/data/

# Request a GPU instance
aws ec2 run-instances \
  --instance-type p4d.24xlarge \
  --image-id ami-0abcdef1234567890 \
  --key-name my-ssh-key \
  --security-group-ids sg-xxx \
  --subnet-id subnet-xxx

# Wait for instance to boot (~3 minutes)
# SSH in, install deps, run training...
# Monitor logs, handle failures, retry on spot
# Download artifacts when complete
# ...and 20 more lines for error handling
// AFTER
$ npm install -g trainfabric
$ trainfabric train ./train.py --data ./dataset.csv
Model saved to ./output/model.pt

Everything from your terminal.

Four commands cover the entire workflow. Keyboard-first, dashboard-optional.

$ trainfabric train ./train.py --data ./dataset --tier balanced --epochs 10
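A full session might look like the sketch below. Only `trainfabric train` is documented on this page, so the `auth`, `logs`, and `pull` subcommands here are hypothetical placeholders standing in for the other three commands:

$ trainfabric auth                            # hypothetical: paste your API key once
$ trainfabric train ./train.py --data ./dataset --tier balanced --epochs 10
$ trainfabric logs                            # hypothetical: stream real-time logs
$ trainfabric pull ./output/                  # hypothetical: fetch model artifacts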

Start training in 2 minutes.

Free tier includes 10 GPU-hours per month.

Create Free Account
No credit card required.

Pay per GPU-hour. To the byte.

No upfront commitments. No reserved instances. Start and stop anytime.

Efficient
Conservative
$0.50/GPU-hr

Minimum viable resources. Ideal for overnight jobs, hyperparameter sweeps, and experimentation where time isn't the constraint.

Balanced
MOST POPULAR
Optimized
$1.20/GPU-hr

The sweet spot. Enough resources to train at speed without burning budget. Most teams live here.

Performance
Aggressive
$2.80/GPU-hr

Maximum allocation. For deadline-critical training, large models, and production fine-tuning where every hour matters.

All tiers include: real-time logs, analytics dashboard, model artifact storage, and CLI + dashboard access.
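For a rough sense of scale, here is the simple rate × hours arithmetic for a hypothetical 6-hour job at each tier's listed price (byte-level metering would only lower these numbers):

```shell
# Cost = hours * rate, using the per-GPU-hour prices listed above.
hours=6
for entry in "Efficient 0.50" "Balanced 1.20" "Performance 2.80"; do
  set -- $entry
  awk -v tier="$1" -v rate="$2" -v h="$hours" \
    'BEGIN { printf "%-11s %d GPU-hr x $%.2f/hr = $%.2f\n", tier, h, rate, h*rate }'
done
```

At the Balanced rate, for example, 6 GPU-hours comes to $7.20; the same job on Performance costs $16.80.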

[ HOW IT WORKS ]

Three steps. That's it.

From install to trained model in under 2 minutes.

01
Install

npm install trainfabric and paste your API key.

02
Run

Type trainfabric train and your job goes live on a GPU.

03
Ship

Your trained model streams back to your laptop.

Under the hood: our allocator picks the best GPUs across 16 suppliers in real time.