
Inephany Raises $2.2M to Make AI Training More Efficient

Image credits: Inephany

As the race to build bigger, smarter AI heats up, the cost of training these models is becoming one of the industry’s most pressing challenges. Inephany, a London-based startup, has just raised $2.2 million in pre-seed funding to tackle this head-on. Led by Amadeus Capital Partners and joined by Sure Valley Ventures and AI pioneer Professor Steve Young, Inephany is setting out to transform how large AI models are trained—making the process drastically more efficient and far less expensive.

This early funding will fuel three core initiatives: expanding its expert engineering and research team, pushing the boundaries of its intelligent AI optimisation platform, and onboarding its first enterprise customers. At the heart of its mission lies a bold ambition—to make AI development smarter, not just more powerful.

Solving AI’s Costliest Bottleneck

AI’s progress has been staggering—but so has its appetite for compute. Since 2012, the computational resources needed for AI have doubled roughly every three months. Training a model like GPT-4 is now estimated to cost upwards of $100 million. With next-gen systems potentially hitting the billion-dollar mark, current training methods are becoming unsustainable.

Inephany’s solution is a real-time AI optimisation engine that guides neural network training dynamically, cutting unnecessary cycles and saving enormous resources. The company claims it can reduce training costs by at least 10x—a game-changer for any organisation working with Large Language Models or similar architectures.

The Team Rewriting the Rules of AI Training

Founded in 2024, Inephany is the brainchild of Dr. John Torr—formerly of Apple Siri—alongside Hami Bahraynian and Maurice von Sturm, co-founders of conversational AI venture Wluper. Their collective experience spans speech AI, neural architectures, and commercial deployments.

Their motivation? Frustration. Having worked deep in the AI trenches, the founders saw firsthand how inefficient and compute-heavy AI development had become. Rather than throwing more hardware at the problem, they envisioned a better way—an intelligent optimisation layer that makes training faster, cheaper, and more environmentally sustainable.

A Smarter Way to Optimise Neural Networks

Where most AI training relies on endless trial and error, Inephany’s approach is different. Its system enhances how efficiently models learn, reduces iteration time, and cuts energy costs—without compromising on performance. This not only benefits LLMs but also holds promise for other neural architectures, such as CNNs for self-driving tech or RNNs used in financial forecasting.

Looking ahead, the company plans to expand its optimisation system to cover inference time as well, unlocking full-lifecycle efficiency across AI deployments.

Why This Matters for the Future of AI

With AI innovation now tethered to massive compute budgets, Inephany’s platform could dramatically widen access. By lowering the barrier to entry, it would let more startups, researchers, and enterprises experiment, iterate, and scale without needing a billion-dollar budget.

Backers like Amadeus Capital and Professor Steve Young see Inephany’s tech as a foundational leap forward. If it scales as expected, this approach could redefine what’s possible for AI—and who gets to build it.
