By implementing a combination of AWS services—including Amazon Bedrock for accessing powerful foundation models, Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for scalable, serverless container execution, and Amazon Aurora as a highly available database—the company can focus on its core innovations without having to manage the underlying infrastructure.
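To make the stack concrete, here is a minimal sketch of how a containerized service running on ECS with Fargate might call a foundation model through Amazon Bedrock using the boto3 Converse API. It is illustrative only: the region, model ID, and prompt are placeholders, not details of Leapter’s actual implementation.

```python
# Minimal sketch: a containerized service (e.g., on ECS with Fargate) calling
# a foundation model via Amazon Bedrock. Region, model ID, and prompt are
# placeholders, not details of Leapter's implementation.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model
    messages=[{
        "role": "user",
        "content": [{"text": "Generate a function that applies a loyalty discount."}],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```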
The scalability and reliability of the AWS Cloud are crucial for the platform’s availability and performance. For Leapter’s customers, this translates into a high-performing, always-available platform that processes complex logic visualizations in real time. The dependability of AWS infrastructure gives Leapter’s customers confidence that their business-critical development processes rest on a stable foundation.
From “Black Box” to “Glass Box”
AI assistants can handle a wide variety of tasks, from summarizing documents and creating images and videos to analyzing and presenting business data. They can also help software developers write program code for new products and services. But how can developers using these code-generation tools ensure that the resulting application code is error-free and secure? This fundamental challenge is precisely what Leapter, a German startup, is addressing. Leapter recently closed a pre-seed funding round of 2 million euros, led by bm-t beteiligungsmanagement thüringen GmbH with participation from SIVentures.
The company, which builds its platform on AWS, has developed a solution that visualizes AI-generated code, making it transparent and comprehensible – effectively transforming the proverbial “black box” into what they call a “glass box.”
The trust problem with AI systems
For businesses, errors in generated source code can have severe consequences—especially when business-critical applications are affected. “We’ve reached a point where AI is transforming nearly every industry. However, for the teams developing these AI applications, it often functions like a black box,” explains Oliver Welte, co-founder of Leapter. “While developers can see what goes in and what comes out, they can’t see how the AI arrives at its conclusions, such as the generated code. This lack of transparency creates significant challenges, particularly when AI systems occasionally ‘hallucinate’—generating code that appears plausible but is incorrect.”
Consider an insurance company whose AI creates code for calculating annual insurance premiums. A subtle logic error—for instance, if the AI misinterprets a rule like “customers with 5 or more years of contract duration receive a discount” as ‘years > 5’ instead of ‘years >= 5’—could have serious consequences: thousands of customers with exactly five years of contract duration would be wrongly excluded from discounts, potentially leading to massive customer dissatisfaction and costly legal consequences.
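In code, the difference is a single character. A minimal sketch (the function names and test are illustrative, not taken from Leapter):

```python
# Illustrative sketch of the off-by-one discount bug described above.

def discount_applies_buggy(contract_years: int) -> bool:
    # AI-generated condition: silently excludes customers with exactly 5 years
    return contract_years > 5

def discount_applies_intended(contract_years: int) -> bool:
    # Intended business rule: 5 or more years of contract duration
    return contract_years >= 5

# A customer with exactly five contract years exposes the discrepancy
assert discount_applies_intended(5) is True
assert discount_applies_buggy(5) is False
```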
The real problem isn’t that AI might make mistakes, but that it’s too difficult for humans to reliably find those errors in the mass of generated code. It’s therefore crucial that developers and domain experts can easily understand and validate the business logic the AI generates as code, so they can take full responsibility for the application. Only then can they ensure that the application works correctly and is safe for its users. The point is not that AI makes mistakes – every system does. The point is that developers need to understand exactly what happens inside AI systems, why errors occur, and how to fix them.
How lack of transparency slows innovation
Think about our favorite apps – the ones we use for shopping, finances, communication, or work. Each was developed by teams that conceived their functions and wrote thousands or even millions of lines of code. Today, these teams increasingly use AI to develop applications faster. But there’s a catch: if the team doesn’t understand what the AI-generated code does or how it works, they’ll hardly trust it enough to use it in the final product.
“It’s like having a brilliant but enigmatic colleague who hands over completed work without explaining their approach,” says Robert Werner, Leapter’s second co-founder, who brings over a decade of experience from industry giants like Pivotal and VMware. “While you appreciate the help, you’ll likely spend considerable time trying to understand the work before you have enough confidence to use it.”
This trust gap affects entire organizations developing AI-supported applications. Product managers can verify only with great effort whether what is being built meets customer needs. This verification costs time and ties up valuable company resources, negating the positive effects of using AI. And in regulated industries like healthcare or finance, the lack of transparency can make AI-supported development nearly impossible. “Without transparency, teams end up spending more time reviewing and understanding AI-generated code than they would have spent writing it themselves,” Werner explains.
Solutions for more transparency in AI usage
Instead of just producing raw code, Leapter’s platform generates visualizations that show exactly what the code does and why. “We call them ‘executable models’,” explains Welte. “They’re visualized workflows that show how data flows through a system, where decisions are made, and how different components interact.”
These models serve as a common language for all software development stakeholders:
- For developers: Understanding the logic behind AI-generated code without time-consuming reverse engineering
- For business teams: Verifying system behavior without technical expertise
- For organizations: Ensuring governance and compliance through transparent, verifiable AI systems
A practical example: A team develops a recommendation system using AI. With Leapter’s platform, they can precisely visualize how the AI-generated code processes user preferences, analyzes product attributes, and generates recommendations. If the recommendations appear inappropriate, the decision path can be traced back to identify where the logic needs adjustment.
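Leapter’s actual model format is not public, but a purely hypothetical sketch can illustrate the idea: a decision flow that records every step, so humans can trace why a recommendation was accepted or rejected.

```python
# Purely hypothetical sketch of an inspectable decision flow: every step is
# recorded so humans can trace why a recommendation was made or rejected.
from dataclasses import dataclass, field

@dataclass
class Trace:
    steps: list = field(default_factory=list)

def recommend(user_prefs: dict, product: dict, trace: Trace) -> bool:
    trace.steps.append(("load_preferences", sorted(user_prefs["categories"])))
    if product["category"] not in user_prefs["categories"]:
        trace.steps.append(("reject", f"category '{product['category']}' not in preferences"))
        return False
    trace.steps.append(("accept", product["name"]))
    return True

trace = Trace()
recommend({"categories": {"books"}}, {"name": "Gadget", "category": "toys"}, trace)
for step in trace.steps:  # inspect the decision path step by step
    print(step)
```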
AWS is also actively addressing the challenge of AI transparency. With Amazon Bedrock Guardrails, AWS offers a way to make AI applications safer and more traceable. Amazon Bedrock Guardrails can detect up to 88 percent of potentially harmful content and filter out over 75 percent of hallucinations in AI responses. Through “Automated Reasoning” checks – the first and only generative AI safeguard to use mathematical verification – factual errors can be prevented and the correctness of responses can be provably verified.
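As a hedged sketch, a generated answer could be checked with the Bedrock ApplyGuardrail API before it reaches users; the guardrail ID and version below are placeholders you would create in your own AWS account.

```python
# Hedged sketch: validating a model response with Amazon Bedrock Guardrails
# via the ApplyGuardrail API. The guardrail ID and version are placeholders
# you would create in your own AWS account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

result = bedrock.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",  # placeholder
    guardrailVersion="1",
    source="OUTPUT",  # check a model response before it reaches users
    content=[{"text": {"text": "The premium discount applies from year 5 onward."}}],
)

# "GUARDRAIL_INTERVENED" means the guardrail blocked or modified the content
print(result["action"])
```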
Tools and technologies like these can help companies make AI systems more understandable, controllable, and thus more trustworthy.
For consumers, this means that the applications we use daily can be developed more efficiently and with fewer errors. New features are delivered faster, errors are detected earlier, and the end product more accurately matches actual user needs. For companies and their employees, this results in real efficiency advantages through the use of AI.
European Innovation and Digital Sovereignty in the AI Era
In a time when AI is significantly driving digital transformation, Europe is increasingly establishing itself as a center for trustworthy AI innovation.
“Building our platform on AWS was the logical choice for us,” explains Werner. “The AWS infrastructure not only provides the performance capabilities we need for our visualization and analysis platform – the strong commitment to European digital sovereignty also aligns with our values. As a European company, we develop innovative technologies that make AI-supported code development transparent and traceable. AWS’s consistent alignment with European standards enables us to drive this innovation forward and help companies better understand and control their AI-supported development processes. This way, our customers benefit from cutting-edge cloud infrastructure while retaining full transparency and control.”
Faster development while maintaining trust
Early users of the Leapter platform report significant benefits:
- Reduced verification time: Teams need up to 60 percent less time to review AI-generated code
- Improved cross-team collaboration: Business and technical stakeholders can cooperate more effectively through a shared visual language
- Faster time to market: A streamlined AI verification process moves products more quickly from concept to launch
“The best technology is invisible to users but completely transparent to developers,” explains Werner. “That’s exactly our goal – making AI an understandable, trustworthy component of software development.”
The future of human-AI collaboration
As AI becomes more deeply integrated into our daily lives, transparency becomes increasingly important. “We are convinced that the future belongs to teams where humans and AI work together, each playing to their respective strengths,” says Welte. “AI can generate drafts and handle repetitive tasks at incredible speed. Humans bring judgment, creativity, and ethical considerations.”
Trust is essential for this way of working – and trust requires transparency. By providing tools that make transparent what AI-generated code does and how it works, companies like Leapter are laying the foundation for the next generation of AI-assisted applications. For consumers, this means the products and services we rely on can continue to benefit from AI’s capabilities – while human oversight ensures they truly meet our needs.
Leapter is currently in a private beta phase with initial design partners. For more information about their visual-first development platform, visit www.Leapter.com.
To learn more about how companies can increase their productivity, create differentiated experiences, and innovate faster with AWS, click here.