This medium is exactly how you get people in fields/industries with slow tech adoption to use AI. Most jobs in legacy businesses live on email -- these people aren't going to want to learn a sleek/new AI chat tool. They're going to stay on their email client.
Interesting. The novelty here seems to be in the method of interacting with the LLM rather than in the AI workflow itself. My guess is each email address is essentially just given a custom prompt? Like others are saying, perhaps this could be popular with legacy businesses, where people don't want to go to a chat window?
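To make the guess concrete, something like this is about all it would take -- a per-address system prompt over a plain model call. The addresses, prompts, and the llm() stub are all hypothetical; no idea what they actually run.

    # Sketch of the guessed wiring: each inbound address maps to its own
    # system prompt, and the email body becomes the user message.
    PROMPTS = {
        "summarize@example.com": "Summarize the sender's email in three bullet points.",
        "translate@example.com": "Translate the sender's email into English.",
    }
    DEFAULT_PROMPT = "You are a helpful assistant replying by email."

    def llm(system: str, user: str) -> str:
        # Stand-in for whatever model API the service actually calls.
        return f"({system}) -> reply to {len(user)} chars of email"

    def handle_inbound(to_addr: str, body: str) -> str:
        system = PROMPTS.get(to_addr.lower(), DEFAULT_PROMPT)
        return llm(system, body)

    print(handle_inbound("summarize@example.com", "Q3 numbers attached, please summarize."))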
Email as an asynchronous interface is an interesting take on chat AI.
Giving out any email address on the main company domain sounds risky. Somebody could register hostname@ or ceo@?
The FAQ should explain who the conversation is shared with. The generic "data is stored [encrypted]" is only about storage. Is an uploaded PDF sent to a third party, possibly to train their next version?
I always hated the streaming text responses in chat interfaces -- it seemed like a cynical attention hack to continuously update the content instead of just returning the answer once it was done. So email makes a lot of sense to me, and it especially makes slow-running local models more bearable: instead of watching the model type at 20 words per minute, just get back to me when you reach a stop token. Plus it can attach any relevant files, or files it generates. I wrote a chatbot 8 years ago (ChatScript dialog trees) that would gather parameters, run a SQL script, and return the resulting table as a .csv you could download. I wish I had thought of doing it as an email character -- it would have integrated with existing processes far more easily than "log in to this service we just spun up whenever you want to use it..."
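For anyone curious, the email-character version of that bot is roughly this much code -- pull a parameter out of the inbound message, run the query, and reply with the table as a .csv attachment. The table, SMTP host, and addresses below are placeholders, not what I actually had.

    import csv, io, smtplib, sqlite3
    from email.message import EmailMessage

    def run_report(db_path: str, region: str) -> str:
        # Run the parameterized query and render the result as CSV text.
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT order_id, total FROM orders WHERE region = ?", (region,)
        ).fetchall()
        conn.close()
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["order_id", "total"])
        writer.writerows(rows)
        return buf.getvalue()

    def reply_with_csv(to_addr: str, region: str, csv_text: str) -> None:
        # Reply with the result attached instead of a download link.
        msg = EmailMessage()
        msg["From"] = "reports@example.com"
        msg["To"] = to_addr
        msg["Subject"] = f"Report for {region}"
        msg.set_content("Result attached.")
        msg.add_attachment(csv_text.encode(), maintype="text",
                           subtype="csv", filename="report.csv")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

The nice part is there's nothing to log in to: the reply lands in the same inbox and thread people were already working in.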
Agreed, definitely some pros to using LLMs in email vs chat. Feels pretty natural to send an email and get a response shortly after with a notification, etc.
Fair points. We have the ability to restrict any address, so it's not much of an issue on our end to shut down ceo@, etc.
We don't use any data for our own training. We send it to Gemini to process the prompt and encrypt it at rest. Keeping it encrypted at rest lets us use it as context in your email thread as more replies come in.
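To make the "encrypted at rest but still usable as context" part concrete, here's a toy sketch of the shape -- not our actual code; Fernet, the in-memory dict, and the per-process key are just stand-ins:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()           # in practice a managed key, not per-process
    box = Fernet(key)
    threads: dict[str, list[bytes]] = {}  # thread_id -> encrypted messages at rest

    def store_message(thread_id: str, text: str) -> None:
        # Every message is encrypted before it is stored.
        threads.setdefault(thread_id, []).append(box.encrypt(text.encode()))

    def build_context(thread_id: str) -> str:
        # Decrypt only when a new reply arrives and the thread is needed as context.
        return "\n".join(box.decrypt(t).decode() for t in threads.get(thread_id, []))

    store_message("t1", "User: please summarize the attached PDF")
    store_message("t1", "Assistant: here is a summary ...")
    print(build_context("t1"))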
That would be a nice way to tag a message as context the bot should be aware of without eliciting a response from it -- have it respond only when it's a direct recipient.
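That rule is basically just a check on the To header vs. Cc -- something like this, with a made-up bot address:

    from email import message_from_string
    from email.utils import getaddresses

    BOT_ADDR = "assistant@example.com"

    def should_respond(raw_email: str) -> bool:
        msg = message_from_string(raw_email)
        to_addrs = [addr.lower() for _, addr in getaddresses(msg.get_all("To", []))]
        # Cc'd mail could still be stored as thread context, just not answered.
        return BOT_ADDR in to_addrs

    raw = "To: assistant@example.com\nCc: team@example.com\nSubject: hi\n\nbody"
    print(should_respond(raw))  # True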
Would be nice to have a natural-language layer to tell it what I want done with each email. I spent way too many hours with Microsoft Power Automate just to say "when an email is from X and the subject line starts with RFQ\W+, download the attachment, save it under the matched name, and add a row to a spreadsheet with the tabular data from the email body" (pretty handy that I could insert data into spreadsheets without having to worry about auth -- one of the benefits of working within an ecosystem).
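For comparison, here's roughly what that rule looks like written out by hand; the sender, paths, and regex are placeholders for the real thing:

    import csv, re
    from email.message import EmailMessage

    RFQ_RE = re.compile(r"^RFQ\W+(\S+)")

    def process(msg: EmailMessage, log_path: str = "rfq_log.csv") -> None:
        # "When an email is from X and the subject starts with RFQ..., do the rest."
        if msg.get("From", "") != "purchasing@example.com":
            return
        match = RFQ_RE.match(msg.get("Subject", "") or "")
        if not match:
            return
        rfq_id = match.group(1)
        # Save each attachment under the matched RFQ number.
        for part in msg.iter_attachments():
            name = part.get_filename() or "attachment"
            with open(f"{rfq_id}_{name}", "wb") as f:
                f.write(part.get_payload(decode=True))
        # Power Automate wrote this row to a spreadsheet; a CSV stands in here.
        body = msg.get_body(preferencelist=("plain",))
        text = body.get_content().strip() if body else ""
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([rfq_id, text])

Stating that in one plain-language sentence and having it compiled into something like the above is exactly the kind of thing I'd want from an email-native assistant.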