Pelora is building infrastructure that creates task-specific model updates directly from data and task descriptions — turning expensive model training loops into fast, scalable model generation.
Organizations increasingly need models adapted to their own data, domains, and workflows. But today, creating a specialized model usually requires collecting data, running fine-tuning jobs, evaluating results, and repeating the process — often across multiple expensive iterations.
Fine-tuning and retraining require GPUs, infrastructure, and repeated experiments before a useful model is ready.
Every new task, customer, dataset, or domain requires another training cycle — slowing down product development and deployment.
Building reliable custom models requires deep ML expertise, careful evaluation, and non-trivial engineering.
Instead of training a new model for every task, Pelora learns how models should adapt. Our system creates compact model updates that can be applied directly to any base model, making adaptation dramatically cheaper than conventional training.
Pelora receives a small task-specific dataset, examples, constraints, or a natural-language task description.
The system generates task-specific adaptation weights instead of running a conventional training loop.
The generated update is applied to a base model and can be evaluated, deployed, or iterated on rapidly.
Pelora is based on a new architecture for generating compact model updates. The system learns from trained adapters and their corresponding tasks, then uses that knowledge to produce new task-specific updates on demand.
The system uses both semantically encoded examples and task descriptions to understand what adaptation is needed.
Instead of optimizing weights from scratch, Pelora generates structured updates that adapt a base model to a new task.
Generated updates can be evaluated on real tasks, allowing the system to improve toward practical model performance.
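The core idea of applying a compact update to base weights can be sketched in a few lines. The exact format of Pelora's generated updates is not described here, so this is an illustrative sketch only: the low-rank form, the `alpha` scaling, and all names are assumptions, loosely following LoRA-style adapters.

```python
import numpy as np

def apply_low_rank_update(W, A, B, alpha=1.0):
    """Return adapted weights W' = W + alpha * (B @ A).

    W: (d_out, d_in) base weight matrix
    A: (r, d_in) and B: (d_out, r) compact update factors,
    with rank r << min(d_out, d_in).
    """
    return W + alpha * (B @ A)

# Toy example: a 64x64 base weight matrix adapted by a rank-4 update.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01
B = rng.standard_normal((d_out, r)) * 0.01

W_adapted = apply_low_rank_update(W, A, B)

# The update shifts every entry of W, yet is parameterized by far
# fewer numbers than the full matrix:
full_params = d_out * d_in          # 64 * 64 = 4096
update_params = r * (d_in + d_out)  # 4 * 128 = 512
```

Because the update is small relative to the base model, it can be generated, shipped, evaluated, and swapped out quickly without touching the base weights.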
Generate models adapted to clinical notes, biomedical literature, genomic data, or specialized healthcare workflows.
Create models specialized for internal documents, customer support, compliance, legal workflows, or proprietary knowledge.
Accelerate experimentation by replacing repeated fine-tuning runs with fast generated model updates.
Pelora is founded by Ben Shapira and Roi Cohen. Our work spans language models, model reliability, biomedical AI, adversarial robustness, and production ML systems. We combine frontier AI research with practical experience building and evaluating real-world AI systems.
M.Sc. in Computer Science from Tel Aviv University. Research Scientist at IBM Research working on biomedical and multimodal AI. Previously built production ML, NLP, and LLM systems at AlgoSec. Former intelligence analyst at IDF Unit 8200.
PhD candidate in AI at HPI & TU Berlin, researching LLM reliability, factuality, and model behavior. Former IBM Research and Microsoft Research intern. Published at NeurIPS, EMNLP, TACL, and EACL. Former researcher at IDF Unit 8200.
We are speaking with AI teams, research labs, and companies that repeatedly train or customize models. If this sounds like you, we’d love to talk.
Contact Pelora