I set up an email triage system using Home Assistant and a local LLM, and here's how you can too

Home Assistant is an incredibly powerful piece of software, and it can do a lot more than just link up your hardware from various vendors into one dashboard. You can link practically anything you can think of, from the software running on your PC to games like Counter-Strike. One powerful integration is the IMAP integration, which allows you to link your Home Assistant instance to your email address, with the ability to process every incoming email as you see fit. I took this functionality and turned it into a personal email triage system, all using Home Assistant and a local LLM.

I’m running Ollama on an AMD Radeon RX 7900 XTX with 24GB of VRAM, using the 8B dolphin-llama3 model for summarization and a context length of 32,768. Home Assistant sends a query with each incoming email, and a JSON object is returned with a category, summary, and priority level. Which model you can use will depend entirely on your hardware, but even a smaller model will do fine at summarizing an incoming email, as long as its context length is large enough to hold the full message.

For those looking to follow along, this article assumes that you already have Ollama or LM Studio configured and ready to accept queries over an API.

Why build an email triage system?

Making emails more digestible

I don’t know about you, but my email inbox can be an overflowing mess at the best of times, and it can be hard to keep on top of. I try to unsubscribe from newsletters when I can, but some of them are work-related, and others I only care about some of the time. I wanted to make this process even a little bit easier on myself, so I figured I would leverage the power of a local LLM to do just that.

Home Assistant, with its IMAP integration, can pull every email that comes through a designated server, complete with the email content if you want. This is a mess of HTML, usually, and one email’s HTML layout can be entirely different from another you receive a few minutes later. There’s no one-size-fits-all approach to parsing incoming emails by hand or by using regex… but what about an LLM?

LLMs don’t do any logical reasoning or analysis, but they’re great at text and pattern recognition. With an increased context window, we can leverage an LLM’s pattern recognition to generate summaries for every incoming email. It won’t be perfect, and it won’t always be fully accurate, but it can provide an overview and the original subject line in a notification so that we know to check it ourselves if we need to.

As to why we use a local LLM, the reason is simple. Do you really want every email you receive to go to a cloud-based LLM for processing? In the name of privacy, I’m definitely not comfortable with that. Do keep in mind, though, that this doesn’t replace checking your inbox manually. You still should do that every now and again. However, this has enabled me to reduce how often I check my emails, which is fantastic. Plus, I get categories alongside every email that goes through Home Assistant, so I can see statistics of what types of emails I get, too.

Setting up our LLM triage REST command

Sending the email from Home Assistant to Ollama

We’ll build two separate components for this: the first is a REST command that goes to our Ollama server, and the second is the automation that takes the output, sends it to our phone, and increments a counter in Home Assistant for each category.

Here’s the full rest_command to add to your configuration.yaml file (make sure to use proper indentation), which is also available on GitHub:

rest_command:
  llm_email_triage:
    url: "http://192.168.1.81:11434/api/chat"
    method: POST
    headers:
      Content-Type: "application/json"
    payload: >
      {{ {
        "model": "dolphin-llama3",
        "stream": false,
        "keep_alive": "24h",
        "messages": [
          {
            "role": "system",
            "content": "You are an email-triage assistant. Read the email JSON, then return ONLY JSON matching the schema."
          },
          {
            "role": "user",
            "content": (email_payload if email_payload is string else (email_payload | to_json))
          }
        ],
        "format": {
          "type": "object",
          "properties": {
            "priority": {"type": "string", "enum": ["P0", "P1", "P2", "P3"]},
            "category": {"type": "string", "enum": ["personal", "transaction", "calendar", "newsletter", "promo", "alert", "receipt", "support", "unknown"]},
            "summary": {"type": "string"},
            "actions": {"type": "object", "properties": {
              "archive": {"type": "boolean"},
              "move_to_folder": {"type": "string"},
              "snooze_until": {"type": "string"},
              "create_task": {"type": "object", "properties": {"title": {"type": "string"}, "due": {"type": "string"}}, "required": ["title"]}
            }, "additionalProperties": false},
            "confidence": {"type": "number"}
          },
          "required": ["priority", "category", "summary", "actions", "confidence"],
          "additionalProperties": false
        },
        "options": {
          "temperature": 0,
          "num_ctx": 32768
        }
      } | to_json }}

The above is a fairly simple REST command to the Ollama API, and it sends the following details in JSON format to the model:

  • The dolphin-llama3 model, which is an 8B-parameter model
  • A keep_alive of 24 hours, so the model stays loaded in VRAM and there are no spin-up delays whenever an email comes through
  • A system prompt: “You are an email-triage assistant. Read the email JSON, then return ONLY JSON matching the schema.”
  • The email payload, passed through as-is if it’s already a string, or serialized to JSON otherwise
  • A response format that must have the following fields:

    • Priority
    • Category
    • Summary
    • Actions

      • Archive
      • Move to folder
      • Snooze until
      • Create task
    • Confidence
  • A context window of 32,768 tokens (32K)

The response format uses Ollama’s “structured outputs” feature, which constrains every response to fit the schema you provide. The quality of the summaries and suggested actions will depend directly on the model you choose and the context size; for reference, an 8B model with 4-bit quantization and a 32K context window requires approximately 15GB of VRAM.
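
For reference, here’s a hypothetical example of the kind of object the schema allows; the exact wording, actions, and confidence value will vary with the model and the email:

{
  "priority": "P3",
  "category": "newsletter",
  "summary": "Weekly product digest from a retailer; no reply or action needed.",
  "actions": {
    "archive": true,
    "move_to_folder": "Newsletters"
  },
  "confidence": 0.82
}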

Once this is added to your configuration.yaml, pointing to your Ollama instance, you can restart Home Assistant, and the new REST command becomes a callable action. The actions field in the schema is there so you can implement your own follow-up behavior based on the LLM’s recommendations if you’d like, though I haven’t implemented any of them myself.
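
If you want to verify the command before building the automation around it, one option is a small test script. The sketch below is hypothetical and not part of the project’s repository; the sample payload and the persistent notification are just placeholders:

script:
  test_llm_email_triage:
    alias: "Test LLM email triage"
    sequence:
      # Send a made-up email payload through the REST command defined above
      - service: rest_command.llm_email_triage
        data:
          email_payload:
            from: "newsletter@example.com"
            subject: "Your weekly digest"
            body: "Here's everything you missed this week..."
        response_variable: triage
      # Surface the raw model reply so you can check the JSON against the schema
      - service: persistent_notification.create
        data:
          title: "LLM triage test"
          message: "{{ triage['content']['message']['content'] }}"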

Setting up our automation to summarize our emails

Processing the response

The automation, also found on my GitHub, is fairly simple, though the syntax can be a little tricky to get right. Once an imap_content event is triggered, the flow looks like this (a rough YAML sketch follows the list):

  • Fetch the message, save it to our “mail” variable
  • Save an “email_payload” variable, a JSON object built from the email’s details (the sender, subject, body text, and so on)

  • Call our previously defined REST command, inserting the “email_payload” variable, and saving the response to a “triage” variable
  • We create a “triage_obj” variable by parsing the response:

    • triage[‘content’]: the HTTP response body returned by the REST command
    • [‘message’][‘content’]: the model’s reply, which the Ollama /api/chat endpoint places at message.content
  • We capture the category from the parsed object and define a mapping from each category to its counter entity
  • We increment the counter that matches the returned category
  • We notify a device with the email’s subject line, the summary, and the priority

    • We attach two notification actions, “SNOOZE_1H” and “ARCHIVE,” though keep in mind that you need to implement these yourself; they’re included as an example of what you could do with these notifications.
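
Here’s a rough sketch of that flow as a single automation. It’s a simplified, hypothetical version rather than the exact automation from the repository: the email_payload fields, the notify.mobile_app_your_phone service, and the counter entity IDs are assumptions you’ll need to adapt to your own setup.

automation:
  - alias: "LLM email triage"
    mode: queued
    trigger:
      - platform: event
        event_type: imap_content
    variables:
      mail: "{{ trigger.event.data }}"
      # Assumed field names; match these to what your imap_content event provides
      email_payload:
        from: "{{ mail.sender }}"
        subject: "{{ mail.subject }}"
        body: "{{ mail.text }}"
    action:
      # Send the email to Ollama via the rest_command defined earlier
      - service: rest_command.llm_email_triage
        data:
          email_payload: "{{ email_payload | to_json }}"
        response_variable: triage
      # Parse the model's JSON reply out of the HTTP response body
      - variables:
          triage_obj: "{{ triage['content']['message']['content'] | from_json }}"
          counters:
            personal: counter.emails_personal
            transaction: counter.emails_transaction
            calendar: counter.emails_calendar
            newsletter: counter.emails_newsletter
            promo: counter.emails_promo
            alert: counter.emails_alert
            receipt: counter.emails_receipt
            support: counter.emails_support
            unknown: counter.emails_unknown
      # Bump the counter for whichever category came back
      - service: counter.increment
        target:
          entity_id: "{{ counters.get(triage_obj.category, 'counter.emails_unknown') }}"
      # Push the subject, summary, and priority to a phone, with example actions
      - service: notify.mobile_app_your_phone
        data:
          title: "{{ triage_obj.priority }}: {{ mail.subject }}"
          message: "{{ triage_obj.summary }}"
          data:
            actions:
              - action: "SNOOZE_1H"
                title: "Snooze 1h"
              - action: "ARCHIVE"
                title: "Archive"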

Once an email is received, the imap_content event is triggered a few seconds later, which kicks off the entire flow. A few seconds after that, given the small size of the model and the 24-hour keep-alive that we pass as part of the request, I get a notification on my phone containing the summary. I can then go to my “emails” dashboard to see a counter for each category. You’ll need to create these counters yourself (a sample definition follows the list), and they are:

  • personal: counter.emails_personal
  • transaction: counter.emails_transaction
  • calendar: counter.emails_calendar
  • newsletter: counter.emails_newsletter
  • promo: counter.emails_promo
  • alert: counter.emails_alert
  • receipt: counter.emails_receipt
  • support: counter.emails_support
  • unknown: counter.emails_unknown
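
Counters are created with Home Assistant’s counter helper, either through the UI or in configuration.yaml. A minimal definition for the first one would look something like this (repeat the same pattern for the rest):

counter:
  emails_personal:
    name: "Emails: personal"
    initial: 0
    step: 1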

You can reset them individually, or create a basic script in Home Assistant that resets all of them at once and add a dashboard button that calls it.
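
If you go the script route, a simple version, assuming the counter names above, could look like this:

script:
  reset_email_counters:
    alias: "Reset email counters"
    sequence:
      # counter.reset returns each counter to its initial value
      - service: counter.reset
        target:
          entity_id:
            - counter.emails_personal
            - counter.emails_transaction
            - counter.emails_calendar
            - counter.emails_newsletter
            - counter.emails_promo
            - counter.emails_alert
            - counter.emails_receipt
            - counter.emails_support
            - counter.emails_unknown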

Home Assistant is an incredibly powerful tool, and there’s a lot of room to join up different pieces of software into one cohesive unit. I’ve linked my GoXLR audio interface from my PC to Home Assistant so I can use a fader to control my lights, and I’ve connected Uptime Kuma to my office light so that it can flash red when my services go down.

There are many ways to make Home Assistant work for you without bespoke hardware, and this is just one of them. The GitHub repository containing this project also shows an example of how you can have tasks automatically populate a to-do list in Home Assistant; a rough sketch of that pattern is below.
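
As a hypothetical sketch only (the repo’s implementation may differ, and todo.my_tasks is a placeholder for whichever to-do list entity you use), you could add an extra step to the automation above that only fires when the model proposes a task:

      # Extra action step: create a to-do item when the model suggests one
      - if:
          - condition: template
            value_template: "{{ 'create_task' in triage_obj.actions }}"
        then:
          - service: todo.add_item
            target:
              entity_id: todo.my_tasks
            data:
              item: "{{ triage_obj.actions.create_task.title }}"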
