
Connecting The Infinite Cloud to Caret Workflows

June 11, 2025
Cody Lund
4 min read

A couple of months ago, a Caret user reached out with a feature request: "Can I trigger a workflow when a Google Form is submitted?" It sounded simple enough. We exposed a protected REST endpoint on our service, generated a custom URL with an embedded secret for the workflow, and asked the user to write a custom script to remap their form properties to a payload matching the workflow inputs.

But the user had never written code before and didn't feel comfortable deferring to ChatGPT. So, we provided a generic script to forward the form data to our endpoint, tagged with a sentinel key, and handled the remapping on our side. Annoying, but still pretty simple.

Then came the follow-up asks: "I also want to trigger the workflow when a new row is added to this Google Sheet. And also when I click this button in Notion. And sometimes from emails."

Each system had completely different payload structures, field names, and data formats. The traditional approach would require teaching non-technical users about webhook scripts and schemas, or building custom mappers for each service on our backend and maintaining a growing mess of integration code.

Instead, we built something we call "magic" webhooks.

Solving for complexity in the land of 10,000 SaaS apps

Many workflows follow the "if this, then that" formula. Often "this" and "that" happen on different corners of the internet. Solving for "that" is its own beast, but magic webhooks make for an elegant, generic solution for "this."

Magic webhooks work with any external service that can send HTTP requests. They automatically extract and map incoming data to workflow inputs using structured LLM calls, regardless of the source format. The user just provides a Caret webhook URL, and we handle the rest.

The key insight is that when a user wants to trigger a workflow from a webhook, that webhook generally contains the data needed to run the workflow; it's just packaged differently. For example, a Stripe payment webhook might send customer.email while a HubSpot contact webhook sends properties.email. Both contain the same information, just with different field names and nesting structures.
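To make the "same data, different packaging" point concrete, here is a minimal sketch. Flattening two simplified payloads (illustrative shapes, not the real Stripe or HubSpot schemas) into dotted paths shows the same email living under different keys:

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts into dotted-path keys, e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

# Illustrative payloads -- simplified, not the real Stripe/HubSpot shapes.
stripe_like = {"customer": {"email": "[email protected]", "name": "Jane"}}
hubspot_like = {"properties": {"email": "[email protected]"}}

print(flatten(stripe_like)["customer.email"])     # [email protected]
print(flatten(hubspot_like)["properties.email"])  # [email protected]
```

The mapping problem is exactly this: the value the workflow needs exists in both payloads, but a hand-written mapper would need to know each service's path ahead of time.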

When a webhook hits our endpoint, we use a structured LLM call to analyze the incoming payload and extract the relevant data based on the workflow's input schema. The LLM acts as a universal translator, mapping arbitrary JSON and text structures to the expected workflow inputs. This means we can support services we've never seen before, as long as they contain the required data somewhere in their payload.
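A minimal sketch of the extraction step, assuming a hypothetical build_extraction_prompt helper (not Caret's actual code): serialize the workflow's input schema and the raw payload into a prompt, send it through a structured-output LLM call (not shown), and parse and validate the JSON reply against the schema.

```python
import json

def build_extraction_prompt(payload: dict, input_schema: dict) -> str:
    """Compose a prompt asking an LLM to map an arbitrary webhook payload
    onto a workflow's input schema. The model is instructed to reply with
    JSON only, so the response can be parsed and validated downstream."""
    return (
        "You translate webhook payloads into workflow inputs.\n"
        f"Workflow input schema (JSON Schema):\n{json.dumps(input_schema, indent=2)}\n"
        f"Incoming payload:\n{json.dumps(payload, indent=2)}\n"
        "Respond with a single JSON object matching the schema. "
        "Use null for any field you cannot find."
    )

prompt = build_extraction_prompt(
    {"properties": {"email": "[email protected]"}},
    {"type": "object", "properties": {"email": {"type": "string"}}},
)
```

Because the schema rides along in every call, the same prompt works for services we have never seen before, which is what makes the approach generic.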

The same approach works for email hooks. Users can email a dedicated address to trigger workflows, and we use LLM extraction to pull structured data from the email body, subject line, and even attachments. For example, an email saying "New customer: John Doe, [email protected], paid $500" can be automatically parsed into separate fields for name, email, and amount.

Non-technical users rejoice

The traditional webhook integration flow looks something like this: read the external service's API documentation, understand their webhook payload structure, write transformation logic to map fields to Caret workflow inputs, test edge cases, and maintain the mapping as APIs change. That's a lot of work, and it requires deep technical knowledge. Most of Caret's users come from non-technical backgrounds and are looking for an experience that "just works."

Magic webhooks eliminate most of this complexity. Users don't need to understand JSON structures, field mappings, or data transformations. They just need to know what data their workflow expects and trust that Caret will find it in the incoming payload. The LLM handles the technical details of parsing nested objects, handling different field names, and extracting the right values.

Magic ain't free

While this approach has rapidly increased our compatibility with external services, it has a number of shortcomings:

  1. LLM extraction is only as good as the instructions and incoming data. Field names can be ambiguous, data may be sparse or missing, or the LLM may simply misinterpret the intended mapping. Handling these cases requires careful prompt engineering and validation rules, which can be tricky for our non-technical users to get right.

  2. We also run into issues with very large payloads that exceed LLM context limits or bury the desired data. Unfortunately, we have to trim payloads to keep costs reasonable, which can confuse users.

  3. The cost of LLM calls for each webhook or email also adds up, especially for high-volume use cases. We've had to optimize our prompts, use cheaper models where possible, and price our subscriptions accordingly.
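For the oversized-payload problem in point 2, the trimming is conceptually similar to this sketch (the byte budget and strategy here are illustrative, not Caret's actual implementation): drop the largest top-level fields first, on the theory that small scalar fields like ids, names, and emails are the most likely to match workflow inputs.

```python
import json

MAX_PAYLOAD_BYTES = 8_000  # illustrative budget, not Caret's actual limit

def trim_payload(payload: dict, budget: int = MAX_PAYLOAD_BYTES) -> dict:
    """Drop the largest top-level fields until the serialized payload
    fits the budget. Crude, but preserves the small fields that most
    often carry the data a workflow needs."""
    trimmed = dict(payload)
    # Visit fields largest-first by serialized size.
    by_size = sorted(
        trimmed, key=lambda k: len(json.dumps(trimmed[k], default=str)), reverse=True
    )
    for key in by_size:
        if len(json.dumps(trimmed, default=str)) <= budget:
            break
        trimmed.pop(key)
    return trimmed
```

The obvious downside, as noted above, is that trimming can silently discard the one field the user cared about, which is exactly the confusion we see in practice.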

All that said, magic webhooks unlocked a seemingly infinite number of possibilities with just a few hours of engineering work. Trading cost for flexibility has been totally worthwhile for a budding product like Caret.


Interested in implementing AI-powered integrations for your workflows? Schedule a consultation with our team to explore how magic webhooks and email hooks can simplify your automation processes.