Legal aid leads on AI: How Lone Star Legal Aid built Juris to deliver faster, fairer results

Under resource strain and rising demand, Lone Star Legal Aid built Juris, an AI-enabled tool, to centralize trusted knowledge and share a replicable model, with the goal of serving more clients in need of legal assistance

Key takeaways:

      • Legal aid is leading on AI adoption — Legal aid organizations are adopting AI faster than the broader legal field, with 74% already using AI in their work, driven by the need to serve the millions of Americans who lack legal help.

      • Lone Star Legal Aid creates Juris — Juris, a new AI-powered tool from Lone Star Legal Aid, improves accuracy and trust through retrieval-augmented generation, source-cited answers, and a secure Azure-based architecture with an integrated citation viewer.

      • Keeping costs low — A phased, two-year build-and-test process kept costs low (about $2,000 a year in infrastructure costs, plus roughly 300 staff hours) and produced dependable results.


A new study finds that under-resourced legal aid nonprofits are adopting AI at nearly twice the rate of the broader legal field, driven by the urgent need to serve millions of Americans who may lack legal help. The study shows that almost three-quarters (74%) of legal aid organizations already use AI in their work, compared with a 37% adoption rate for generative AI (GenAI) across the wider legal profession. Lone Star Legal Aid (LSLA), a legal aid nonprofit serving eastern Texas, is one of those early adopters.

According to LSLA, its attorneys were spending too much time and money hunting for answers across pricey platforms and scattered PDFs. Key materials lived in research databases, internal drives, and static repositories, while documents vetted by individual staff members were not centrally accessible. Without a single, trusted hub, staff faced slower research, duplicated effort, and delays that ultimately affected clients.

These strains are not unique to LSLA. Court help centers and self‑help portals face the same fragmentation, licensing costs, and uneven access to authoritative guidance. A verifiable, consolidated knowledge hub that stabilizes quality while reducing spending would meet a need felt across the sector.

To solve this problem, LSLA turned to AI to create a legal tool called Juris, built to return fast, source‑cited answers. Juris was designed to centralize high‑value legal materials, cut reliance on expensive third‑party platforms, and lay a flexible foundation that the organization could reuse beyond legal research for internal operations and future client tools.

Multifaceted approach to ensuring accuracy and reliability

Juris’s designers relied on several features to advance its mission of increasing access to justice, including:

Design methods fuel trustworthy output — Juris was built for accuracy using a number of methods, such as a retrieval-augmented generation (RAG) pipeline that ensures the chatbot delivers fact-based, source-cited answers. It also uses semantic chunking, a process that breaks a document into natural, meaning‑based sections (for example, a heading plus the paragraphs that belong to it) so the original context stays together.
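To make semantic chunking concrete, here is a minimal Python sketch of heading-based chunking. It is illustrative only: the heading heuristic, function name, and data shapes are assumptions made for this article, not LSLA's published implementation.

```python
import re

def semantic_chunks(text: str) -> list[dict]:
    """Break a document into meaning-based sections, keeping each heading
    together with the paragraphs that follow it (illustrative sketch)."""
    # Crude, assumed heuristic: numbered headings ("2.1 Intake") or ALL-CAPS lines.
    heading_re = re.compile(r"^(\d+(\.\d+)*\s+.+|[A-Z][A-Z ,.'-]{5,})$")
    chunks, heading, body = [], "(document start)", []
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if heading_re.match(line):
            if body:  # close out the previous section before starting a new one
                chunks.append({"heading": heading, "text": " ".join(body)})
            heading, body = line, []
        else:
            body.append(line)
    if body:
        chunks.append({"heading": heading, "text": " ".join(body)})
    return chunks
```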

When a user asks a question, Juris retrieves only the most relevant of these sections. Limiting the AI to evidence from those passages improves accuracy and reduces hallucinations because the model is not guessing from memory. Instead, it is grounding answers in the text it just accessed.
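A retrieval step along these lines can be sketched in a few lines of Python. Ranking pre-embedded chunks by cosine similarity is one common approach; the function names, and the assumption that embedding vectors are precomputed, are illustrative rather than a description of Juris's internals.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(question_vec: list[float], indexed_chunks: list[dict], k: int = 4) -> list[dict]:
    """Rank pre-embedded chunks by similarity to the question and keep only
    the top k; these passages become the model's evidence."""
    ranked = sorted(indexed_chunks,
                    key=lambda c: cosine(question_vec, c["vector"]),
                    reverse=True)
    return ranked[:k]
```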

Solid technical architecture helps reliability — Juris’s technical architecture also supports reliable results: it combines Azure OpenAI, for secure, stateless access to AI models, with services that handle document ingestion, processing, and vector storage. Users interact through a custom internal web interface that integrates a PDF viewer alongside the chat experience, enabling seamless citation and document navigation. The platform is securely hosted on Azure App Service with continuous deployment orchestrated through GitHub, which provides reliable operations and streamlined updates.
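For readers curious what a grounded, source-cited call to Azure OpenAI might look like, here is a hedged sketch using the AzureOpenAI client from the official OpenAI Python SDK. The deployment name, environment variables, and prompt wording are placeholders; LSLA has not published its code, so treat this as a sketch of the pattern, not Juris itself.

```python
import os
from openai import AzureOpenAI  # official OpenAI SDK with Azure support

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def answer(question: str, passages: list[dict]) -> str:
    """Ask the model to answer ONLY from the retrieved passages and to cite
    them by number, so each claim can be checked in the PDF viewer."""
    context = "\n\n".join(f"[{i + 1}] {p['heading']}: {p['text']}"
                          for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder deployment name, not LSLA's actual model
        messages=[
            {"role": "system",
             "content": ("Answer using only the numbered passages below and cite them "
                         "like [1]. If the answer is not in the passages, say so.\n\n" + context)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Because each call carries its own context and no conversation state lives on the server, the access pattern stays stateless, matching the architecture described above.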

Phased approach to building and testing yielded dependability — To further ensure trustworthy results, LSLA developed Juris through a structured, phased approach over two years. It began with a concept phase focused on clearly identifying the problem, followed by a platform evaluation that compared open-source and commercial solutions. A prototype was then created and demonstrated as a proof of concept.

In addition, internal testing included adversarial exercises, hallucination detection, and rigorous validation of citation reliability. Based on these findings, the team implemented enhancements, such as moving from size-based to semantic chunking, improving the interface, and expanding the set of source materials. Juris is now in pilot preparation and undergoing final refinements before its release to a select group of subject matter experts.
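As one illustration of what validating citation reliability can involve, the sketch below checks that every citation marker in an answer points to a passage that was actually retrieved. This is an assumed, simplified example of such a test, not LSLA's actual test suite.

```python
import re

def citations_are_grounded(answer_text: str, passages: list[dict]) -> bool:
    """Flag dangling citations: every [n] marker in the answer must refer to
    one of the retrieved passages, and at least one citation must be present."""
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer_text)}
    valid = set(range(1, len(passages) + 1))
    return bool(cited) and cited <= valid
```

Adversarial exercises can then feed the system off-topic or trick questions and assert that every answer either cites valid passages or declines to answer.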

Efficient resourcing and sharing learnings

LSLA’s phased approach to building and testing also ensured that sustainability was built in from the beginning. Ongoing maintenance is minimal, and Microsoft’s nonprofit Azure credits keep infrastructure costs around $2,000 per year.

The most significant cost was in staff time. Development so far totals roughly 300 staff hours (or about 0.5 full-time equivalent, plus 0.3 FTE over two years). Once Juris enters phase two, which has been funded by a Legal Services Corporation (LSC) technology initiative grant, expected benefits will include faster, more consistent research and reduced workload for frontline and administrative staff, plus a modular framework that others can adapt.

Other legal service organizations facing similar challenges can learn from Juris’s development, testing, and implementation, as well as from other related case studies. These recurring lessons include:

      • beginning with a small, manageable scope
      • inviting end users in from the start, and
      • carving out protected time so staff can innovate alongside daily duties.

Looking ahead, the LSLA team will continue to roll Juris out in phases, while building sister tools. LSLA also plans to share lessons learned through LSC’s AI Peer Learning Labs to help other organizations replicate the model.

Real change at scale will only come from collaborating across organizations to share playbooks, pool datasets, and co‑design tools that lift quality while lowering cost. Only through such partnership, and by sharing the lessons of early AI adopters, can peers adapt the model and, together, scale solutions that narrow the justice gap.

Angela Tripp, Program Officer for Technology at the Legal Services Corporation, contributed to this article.

