
RLWRLD Secures $14.8M to Advance AI Models for Robotics

Image Credits: RLWRLD

As automation accelerates globally, South Korean startup RLWRLD is stepping out of stealth mode with $14.8 million in seed funding to build foundational AI models for robotics. The funding round, led by Hashed, also saw participation from Mirae Asset Venture Investment, Global Brain, and several strategic backers across Japan, Korea, and India.

RLWRLD’s core ambition is to solve a common robotics pain point: robots perform repetitive tasks well, but they often struggle with delicate, dynamic, or human-like actions. The startup is tackling this by merging large language models (LLMs) with traditional robotics software to build an AI system capable of fast, agile movement and logical decision-making.

“Processes that currently require manual labor can be fully automated by capturing and replicating human expertise,” said Jung-Hee Ryu, RLWRLD’s founder and CEO.

With more than 540,000 new industrial robots installed worldwide in 2023, the automation race is heating up. Yet many sectors still rely on human workers due to limitations in robot dexterity and adaptability. RLWRLD aims to bridge this gap using its purpose-built AI foundation model for robotics.

The company plans to use the new funds to:

  • Launch proof-of-concept (PoC) projects with its strategic investors
  • Purchase robots, GPUs, and data collection devices
  • Hire top-tier robotics and AI researchers
  • Develop advanced five-fingered hand movements, a capability still lacking among competitors like Tesla, Figure AI, and 1X

RLWRLD has attracted support from notable companies such as LG Electronics, SK Telecom, Amber Manufacturing, KDDI, Mitsui Chemicals, and others. These investors will not only contribute capital but also provide real-world environments for data collection and early deployments.

The startup’s goal is to create a platform that can support diverse robot types — from industrial arms and collaborative robots to autonomous mobile robots and humanoids. A humanoid robot demonstration powered by RLWRLD’s model is scheduled for later this year.

RLWRLD was founded in 2024 by Jung-Hee Ryu, a serial entrepreneur known for Olaworks (acquired by Intel) and the startup accelerator FuturePlay. His decision to start RLWRLD stemmed from the scarcity of advanced AI ventures in Korea and Japan, despite both countries’ manufacturing strength.

To build RLWRLD, Ryu recruited six professors from Korea’s leading universities — KAIST, SNU, and POSTECH — along with their research teams. This brain trust now powers the development of RLWRLD’s foundational robotics model.

RLWRLD is not alone in the race. Startups like Skild AI and Physical Intelligence are also building foundational models for robotics, alongside tech giants like Nvidia and Google DeepMind. But Ryu believes RLWRLD’s edge lies in its early access to high-degree-of-freedom (DoF) robots and a robust team of AI and robotics specialists.

“Other companies use low-DoF robots like two-fingered grippers,” Ryu noted. “We’ve secured high-DoF robots, allowing for more complex and human-like tasks.”

Thanks to its proximity to manufacturing hubs in Korea and Japan, RLWRLD can also collect valuable data quickly — a key factor in training effective robotics models. According to a 2024 report, the two countries collectively accounted for 9.2% of global manufacturing output.

RLWRLD plans to begin generating revenue this year through PoC trials and real-world deployments. In the long term, the company wants to serve factories, logistics hubs, retail stores, and eventually domestic settings — like robots for household chores.

With a lean team of 13 employees, RLWRLD is positioning itself as a serious contender in the future of intelligent robotics.
