Flaws in Jan AI Expose Systems to Remote Manipulation

Multiple security vulnerabilities have been discovered in Jan AI, an open-source alternative to ChatGPT, potentially allowing remote, unauthenticated attackers to manipulate systems, according to security researchers at Snyk. These flaws make Jan AI systems susceptible to exploitation via malicious webpages, putting user data and system integrity at risk.

Jan AI, developed by Menlo Research, is designed as a personal assistant that can operate offline on desktop and mobile devices. The platform supports various Large Language Models (LLMs) and allows users to customize their experience by downloading models locally or importing them from external repositories like HuggingFace. The system is powered by Menlo’s self-hosted AI engine, Cortex.cpp, which functions as the backend server, interacting with user interfaces built on the Electron framework.

The open-source software, which has over a million downloads on GitHub, gives users full control over the AI’s operation by eliminating the need for cloud hosting services. However, because Jan and Cortex are intended to run locally, they ship without authentication mechanisms, leaving their local API server open to requests triggered by malicious webpages.
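This class of attack works because a browser will happily issue requests to `localhost` on behalf of whatever page the user is visiting. One common mitigation is to check the `Origin` header that browsers attach to cross-site requests. The sketch below is illustrative only; the port and origin values are hypothetical, not Jan’s actual configuration:

```python
# Illustrative only: an Origin allowlist for a localhost API server.
# The port and origins here are hypothetical, not Jan's real values.
TRUSTED_ORIGINS = {"http://localhost:1337", "app://jan"}

def is_trusted_origin(origin: str) -> bool:
    """Decide whether a state-changing request should be honored.

    Browsers attach the requesting page's Origin to cross-site
    requests, so a malicious webpage probing a local server is
    detectable even when the server itself requires no login.
    """
    return origin in TRUSTED_ORIGINS
```

A server applying this check would reject any POST whose `Origin` is an arbitrary website, closing the drive-by avenue the article describes.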

Key Vulnerabilities Identified in Jan AI

Snyk’s security audit uncovered several vulnerabilities within Jan AI that could be exploited by attackers:

  1. Arbitrary File Write: A function in Jan AI for uploading files to its server was found to be improperly sanitized. This flaw could allow an attacker to inject malicious files into the system through a compromised webpage.
  2. Out-of-Bounds Read: The GGUF parser in Jan AI exhibited out-of-bounds issues, which could be leveraged by attackers to read data beyond the intended boundaries. This presents a risk of unauthorized data access.
  3. Missing CSRF Protection: Jan AI’s backend server lacked protection against Cross-Site Request Forgery (CSRF) on its non-GET endpoints. Although the server implements Cross-Origin Resource Sharing (CORS), CORS only restricts whether a webpage can read responses; it does not stop a malicious page from sending state-changing requests in the first place.
  4. Remote Code Execution (RCE): The most critical vulnerability identified was remote code execution (RCE) through Cortex.cpp’s support for the Python engine. Attackers can exploit this flaw by modifying model configurations to inject malicious payloads, which can be executed when the model is loaded.
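The first item on the list, arbitrary file write, typically stems from joining a user-supplied filename onto a server directory without normalizing the result. A minimal sketch of the missing check, using hypothetical paths rather than Jan’s actual upload code:

```python
import os

UPLOAD_DIR = "/srv/jan/uploads"  # hypothetical upload root

def safe_upload_path(filename: str) -> str:
    """Resolve an upload path and refuse anything escaping UPLOAD_DIR.

    Without this normalization step, a name like '../../etc/cron.d/x'
    lets an attacker write outside the intended directory - the
    path-traversal pattern behind CVE-2025-2446.
    """
    root = os.path.realpath(UPLOAD_DIR)
    candidate = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    if os.path.commonpath([candidate, root]) != root:
        raise ValueError(f"path traversal rejected: {filename!r}")
    return candidate
```

Resolving the path first and then comparing it against the intended root is what “proper sanitization” means here; rejecting the string `..` alone is not enough, since encodings and symlinks can reintroduce traversal.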

Exploitation of Vulnerabilities

By exploiting the cross-origin file write vulnerability, attackers can upload a crafted GGUF file to the server. Loading that file can then trigger an out-of-bounds read, exposing data through a metadata field the attacker controls. Further manipulation could disable CORS protections, giving attackers greater control over the system and allowing them to exfiltrate sensitive data.
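An out-of-bounds read of this kind arises when a length field inside the file is trusted without checking it against the buffer’s real size. The sketch below uses a simplified length-prefixed format as a stand-in for GGUF metadata, not the real layout:

```python
import struct

def read_length_prefixed_string(buf: bytes, offset: int) -> tuple[str, int]:
    """Read a u64-length-prefixed string, validating the claimed length.

    A crafted file can declare a length far larger than the buffer;
    without the bounds checks, a native parser would read adjacent
    memory - the out-of-bounds pattern behind CVE-2025-2439. This is a
    simplified stand-in format, not the actual GGUF layout.
    """
    if offset + 8 > len(buf):
        raise ValueError("truncated length field")
    (length,) = struct.unpack_from("<Q", buf, offset)
    start = offset + 8
    if start + length > len(buf):
        raise ValueError(f"declared length {length} exceeds buffer")
    return buf[start:start + length].decode("utf-8"), start + length
```

In a memory-safe language the oversized read fails loudly; in C/C++ parsers like Cortex.cpp’s, the same missing check silently returns whatever bytes sit past the buffer, which is what makes the attacker-controlled metadata field useful for leaking data.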

While the GGUF file flaw provides an interesting method of leaking data, the researchers pointed out some limitations. Attackers cannot control what data is exposed when the crafted model file is loaded, making it uncertain whether sensitive information would be disclosed.

The RCE vulnerability, caused by the ability to inject commands into the Python engine, is a particularly serious risk. An attacker could execute arbitrary commands on the affected system by altering a model’s configuration so that malicious input reaches the Python binary invoked by Cortex.cpp.
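The injection works because an attacker-writable configuration value ends up deciding what gets executed. A common mitigation, sketched here with hypothetical field names rather than Cortex.cpp’s actual config schema, is to validate the interpreter path against an allowlist instead of executing whatever the model config supplies:

```python
import os

ALLOWED_PYTHON_BINARIES = {"/usr/bin/python3"}  # hypothetical allowlist

def resolve_engine_binary(model_config: dict) -> str:
    """Return the Python interpreter for a model, refusing unknown paths.

    If an attacker-writable config can name an arbitrary binary (or
    smuggle extra arguments into it), loading the model executes
    attacker code - the command-injection pattern behind CVE-2025-2445.
    The 'python_binary' key is a hypothetical example field.
    """
    requested = model_config.get("python_binary", "/usr/bin/python3")
    normalized = os.path.normpath(requested)
    if normalized not in ALLOWED_PYTHON_BINARIES:
        raise ValueError(f"untrusted interpreter path: {requested!r}")
    return normalized
```

Combined with passing arguments as a list (never through a shell), this keeps a tampered model configuration from becoming code execution at load time.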

Fixes and Response

Snyk reported the vulnerabilities to Menlo Research on February 18, and all issues were addressed by March 6. In total, four Common Vulnerabilities and Exposures (CVEs) were assigned to these flaws:

  • CVE-2025-2446: Arbitrary file write via path traversal
  • CVE-2025-2439: Out-of-bounds read in GGUF parser
  • CVE-2025-2445: Command injection in Python engine model update
  • CVE-2025-2447: Missing CSRF protection

These vulnerabilities have been patched, but users of Jan AI should ensure they have updated their systems to mitigate the risk of exploitation.

Conclusion

While Jan AI offers an impressive offline, self-hosted alternative to cloud-based AI systems, its lack of authentication and security protections made it vulnerable to remote manipulation. With the vulnerabilities now patched, it is important for users to stay vigilant and ensure that their systems are updated to avoid potential threats.
