Everybody is using AI in software development right now, but nobody seems to really know what they're getting for their money.
One company has a solution for that. Jellyfish, which provides a software engineering intelligence platform, today launched new features for Jellyfish AI Impact that deliver end-to-end visibility into AI's impact on productivity, quality and value across the software development life cycle (SDLC).
The platform provides real data about which AI tools actually work and whether they’re worth the cost.
Finally, Some Actual Data
So, Jellyfish decided to solve this with its AI Impact platform, which does away with the guesswork and lets users actually see the data.
The new features for Jellyfish AI Impact, which now supports Anthropic's Claude Code and Windsurf in addition to GitHub Copilot, Cursor, Gemini and Amazon Q, include:
- Multitool Comparison: Jellyfish pulls data on tool adoption, cost and impact for all of your AI tools into one consolidated view so you can compare them side by side. Want to know if Claude Code is better than Copilot for your specific use cases? Now you can find out instead of going on gut feeling: benchmark multiple AI tools, identify the highest-value ones for particular use cases and build the most effective AI tool stack.
- Code Review Agent Dashboard: With a growing number of companies using code review tools, Jellyfish allows teams to measure the impact of AI code review agents like CodeRabbit, Graphite and Greptile across the full SDLC.
- Dynamic AI Tool Spend Dashboards: With Jellyfish’s real-time, usage-based spend tracking at both the team and project level, companies can now tie their spending directly to outcomes to determine if the investment was worth it for a specific initiative.
As Jellyfish co-founder and CEO Andrew Lau told The New Stack, his company is “giving customers the ability to tie granular AI spend to delivery impact, helping engineering and finance leaders better understand what that investment is worth at an individual and/or project level — all at a time when AI costs vary dramatically with little understanding or visibility into why.”
Why This Matters
The smart move Jellyfish made here is staying vendor-neutral. They’re not trying to sell you on specific AI tools — they’re just helping you figure out which ones are working for your team.
“As the industry continues into the agentic era, we give you the insight you need to optimize today’s tools and prepare for what’s next,” Lau explains. Translation: The AI tool landscape is going to keep changing fast, so you need a system that can adapt.
This is likely just the beginning. AI agents are getting more sophisticated, pricing models keep evolving and more companies are going to demand proof that their AI investments are worthwhile.
That’s why this kind of solution makes a lot of sense. According to Jellyfish’s 2025 State of Engineering Management report, 90% of engineering teams are now using AI coding tools. That’s up from 61% just last year. But many engineering teams are still flying blind with AI, lacking the data necessary to build the most effective AI tool stack.
Many companies are basically throwing money at AI tools and hoping for the best. They’ve got GitHub Copilot here, Claude Code there, maybe some Cursor thrown in, and they’re paying for all of it without really understanding which tools are pulling their weight.
The Problem Nobody Wants To Talk About
“AI adoption is accelerating, costs are shifting and we’re all under pressure to make it work — both from a code, product, business value and adoption standpoint,” Lau explains. “There are so many tools available now that are doing innovative, but different things — often competing for mindshare and coexisting within the same engineering org.”
Jellyfish AI Impact combines adoption metrics, dynamic value tracking and delivery outcome data across all AI-powered tools — from coding assistants to code review agents — for a clear, comprehensive view of AI’s role in software delivery, the company said. With these holistic insights, engineering leaders can drive smarter investments and better delivery outcomes across the SDLC.
“Leaders need ways to disambiguate the noise to understand which tools and agents work for which teams, codebases and projects, and why,” Lau added. Guessing isn't going to fly much longer, especially as these tools start costing real money and executives want to see real returns.
The Bottom Line
Companies like DraftKings and Keller Williams are already using Jellyfish to get smarter about their AI spending, the company said. As the market matures, having actual data about what’s working (and what isn’t) is going to separate the companies that use AI effectively from the ones that just use AI expensively.
The new Jellyfish features are available now.