SAN FRANCISCO, Sept. 9, 2024 /PRNewswire/ -- On September 3, Gru.ai ranked first with a score of 45.2% in the latest results released on the SWE-Bench Verified leaderboard, a widely cited standard for evaluating AI models. SWE-Bench Verified, which measures AI models' ability to resolve real-world software issues, was developed by OpenAI in collaboration with the SWE-Bench team.
Bug Fix Gru, one of the four agents provided by Gru.ai, participated in the SWE-Bench Verified evaluation. According to the Gru team's blog, equipping Bug Fix Gru with a comprehensive operating environment and a rich set of development tools laid the foundation for the high score, while improvements to the workflow, multimodal support, and the addition of RAG (retrieval-augmented generation) capabilities boosted it further. Notably, the Gru team emphasized that an evaluation process is in place to assess the impact of any change.
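To illustrate the kind of retrieval step a RAG-enabled bug-fix agent might use, the sketch below ranks repository files by simple term overlap with an issue description, so the most relevant files can be placed in the model's context. This is a minimal hypothetical example with made-up file names and scoring; it is not Gru's actual implementation, which is not described in detail publicly.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase a string and split it into word-like tokens."""
    return re.findall(r"[a-z_]+", text.lower())

def rank_files(issue, files, top_k=2):
    """Return the top_k file names whose contents best match the issue text."""
    issue_terms = Counter(tokenize(issue))
    scores = {}
    for name, content in files.items():
        file_terms = Counter(tokenize(content))
        # Score = count of shared term occurrences (a crude relevance proxy;
        # a production system would use embeddings or a proper index).
        scores[name] = sum(min(c, file_terms[t]) for t, c in issue_terms.items())
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy repository and issue (entirely fictional, for illustration only).
repo = {
    "parser.py": "def parse_config(path): ...",
    "auth.py": "def login(user, password): ...",
    "utils.py": "def slugify(title): ...",
}
issue = "login fails when password contains unicode"
print(rank_files(issue, repo))  # "auth.py" ranks first
```

In practice, a step like this feeds the retrieved files to the model alongside the issue text, which is one common way RAG improves an agent's fix rate on repository-scale tasks.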
Gru.ai, a company that builds AI developers, provides four types of software engineering agents.
Gru.ai previously secured $5.5 million in angel investment. Alongside Gru, several other companies in the sector, including Cognition (maker of Devin), Factory, Cosine.sh, and Codium.ai, have also announced funding. As large-model capabilities mature, the coding-agent field is experiencing a surge of investment and innovation, pointing to a bright future for this evolving industry.