<![CDATA[Jam.dev blog]]>https://strawberryjam.ghost.io/https://strawberryjam.ghost.io/favicon.pngJam.dev bloghttps://strawberryjam.ghost.io/Ghost 5.130Wed, 15 Oct 2025 18:47:37 GMT60<![CDATA[Making Support More Technical: Appcues’ Ricky Perez on the Future of AI in Customer Support]]>https://strawberryjam.ghost.io/how-appcues-customer-support-uses-ai/68eeaf70cd9b950001a5f58cMon, 13 Oct 2025 20:22:00 GMT

Ricky Perez is the Director of Support at Appcues - a platform that helps product teams deliver better user onboarding and in-app experiences.

We spoke to Ricky about how AI is reshaping customer support, why he believes every support team should get more technical, and how AI can turn support into a proactive partner to engineering.

Here are the highlights!

Support should get more technical

Ricky’s north star is simple: the less non-engineering work you put in front of engineering, the faster the hard problems get fixed.

“Can we free up bandwidth from our engineering team so they have to worry about zero support issues?”

For him, that means making support more self-sufficient. Teams need to have the ability to debug, identify misconfigurations, and even fix small bugs on their own.  

An autonomous support team gives engineers their time back: to focus on harder technical problems and ship new features. 

But that’s not the only benefit of having a technical support team. When certain issues do need to be handed off to the engineering team, these handoffs become way more efficient.   

When a customer reports “my flow isn’t working,” the support team already knows which flow, which settings, and which users are affected - complete with session replay and event logs.

That context means support can hand engineers a near-ready diagnosis: bug vs. misconfiguration vs. feature request. Engineering spends less time reproducing issues, and support spends less time waiting.

Where AI helps today

According to Ricky, there are three high-impact areas where AI already adds value:

  1. Summarization: Long ticket threads can overwhelm new responders. A first-pass AI summary helps humans see what’s happening faster.
  2. Triage: AI can identify which tickets belong to support, engineering, or product - cutting down on misrouted issues.
  3. Deflection: Thoughtful, context-aware auto-replies that handle the top 20% of common requests let the team focus on deeper investigations.
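As an illustration of the triage idea, routing can be a thin classification layer in front of the help desk. This keyword-based sketch is purely hypothetical - a production system would use an LLM or a trained classifier rather than keywords - but it shows the shape:

```javascript
// Hypothetical triage sketch: route an incoming ticket to support,
// engineering, or product. The keyword rules stand in for whatever
// classifier (LLM or otherwise) a real pipeline would use.
const ROUTES = [
  { team: "engineering", keywords: ["stack trace", "500", "crash", "exception"] },
  { team: "product", keywords: ["feature request", "would be nice", "roadmap"] },
];

function triage(ticketText) {
  const text = ticketText.toLowerCase();
  for (const route of ROUTES) {
    if (route.keywords.some((kw) => text.includes(kw))) return route.team;
  }
  return "support"; // default owner when nothing matches
}
```

The default branch matters: anything the classifier can’t place confidently should land with a human, not get deflected.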

What AI shouldn’t do, Ricky argues, is replace humans in escalations. “If a customer is angry or stuck, they want to be heard.” AI can route and prep the case, but escalations still require a level of patience and empathy that only a human can provide. 

AI helps support teams be proactive instead of reactive

Ricky believes the real promise of AI is stopping tickets before they’re created.

Appcues recently launched Captain, an in-app insights agent that surfaces configuration issues and product usage patterns. The long-term goal is to have Captain flag broken or unseen experiences before a user ever files a ticket.

In Ricky’s words: “Stop the spark before it becomes a fire.”

By combining in-product telemetry with AI alerts, the team can turn support from a reactive function into a proactive one that prevents frustration rather than just resolving it.

Ricky’s hiring philosophy: empathy first, tech second

AI is lowering the barrier to entry for technical work. Tools like ChatGPT, Claude, and Lovable make it easier for anyone in support to learn CSS, SQL, or API debugging.

That’s why Ricky hires first for empathy and curiosity, not credentials: “If you pass the vibe check, we can teach you the tech.”

The mix he’s after: emotionally intelligent communicators who love solving puzzles and are comfortable using AI to learn new technical skills on the job.

Actionable takeaways for support leaders

  • Level up technical depth. Teach team members basic debugging and log analysis; create a “support engineering” path. It's easier than ever before. 
  • Choose AI-capable tooling. Your help desk is now a platform for automation - pick one that supports experimentation.
  • Automate the first mile. Summaries, triage, and deflection save time, but humans handle escalations.
  • Go proactive. Use product telemetry and AI alerts to fix misconfigurations before tickets are submitted.
  • Package context for engineering. Auto-generate reproduction steps and clear categorization (bug, config, feature) to save your engineers’ time.
  • Hire for curiosity. AI closes the skills gap, but empathy and problem-solving still win.

Support is evolving fast: from a reactive function to a technical, proactive partner that keeps engineering focused and customers happy. Ricky’s approach to this shift aligns with many other support leaders we’ve spoken to for this series: hire curious and empathetic humans, and use AI to empower them.

We had such a great time jamming with Ricky! We’ve been having similar conversations with support and engineering leaders at orgs like Monday.com, Intercom, and Productboard to unpack how they actually build with AI. You can find previous episodes on YouTube.

]]>
<![CDATA[How Vanta Builds with AI, with VP of Engineering Iccha Sethi]]>https://strawberryjam.ghost.io/how-vanta-builds-with-ai-with-vp-of-engineering-iccha-sethi/68deefc2fb4df30001f8cfcdThu, 02 Oct 2025 21:44:30 GMTVanta helps companies like Atlassian, Cursor, Notion, and GitHub automate security and compliance. We spoke to Iccha Sethi, their VP of Engineering, to learn how her team is deploying AI in their product and across their internal engineering workflows.

Here are the highlights from our conversation.

AI as a co-pilot, not a replacement

Vanta’s engineers use AI across multiple parts of the dev cycle - from test generation to writing RFCs and postmortems. But Iccha believes AI’s value today is in augmentation, not automation. 

Tools that do code generation can be hit or miss. Real-world codebases have years of patterns baked in - and AI tools aren’t yet thinking about how a team wants to uplevel or evolve those patterns.

Instead of expecting AI to own code generation end-to-end, Iccha’s team treats it as a thought partner: helping them brainstorm, debug, and document faster. The near-term impact is felt in speed and clarity, not yet in fully automated coding.

Where AI drives real velocity gains: testing and communication

When we asked Iccha which part of the dev cycle AI is most helpful for, she mentioned testing and communication.

“We use a third-party vendor to generate unit tests for older parts of our codebase to improve test coverage - and AI’s been great for that.”
“ChatGPT is also a great brainstorming partner. I can start with a few bullets, and it helps me flesh things out or spot gaps.”

By focusing AI on repetitive, but essential tasks - like test generation and documentation - the team compounds small efficiency wins without compromising quality.

How AI has changed Vanta’s hiring process and priorities

AI is changing how Vanta hires and evaluates engineers. Given how fast the rate of change is, Iccha thinks engineering leaders should almost disregard whether a candidate knows the latest frameworks, and focus on how open they are to learning and experimenting.

“The biggest thing I’m looking for in new hires is how willing they are to adopt and try new tools. Growth mindset has become more important than ever.”
“There are people who say every new tool is a distraction. I need more open-mindedness. Let’s try it, and if it doesn’t work, fine.”

According to Iccha, the best engineers are curious enough to explore, but disciplined enough to not chase every shiny object.

Vanta has also changed their interview process to align with the times, given that every engineer now has AI copilots at their disposal. 

“We’re discussing how to make our interviews reflect the real world - where engineers have access to AI tools. The idea is to let candidates use Cursor or Copilot during interviews and evaluate how they leverage them.”

The company is also exploring ways to prioritize code review over code generation. 

“Code review becomes a more important skill than ever with AI code generation.”

As more code is written by machines, human judgment - deciding what’s good, safe, and maintainable - becomes the differentiator.

AI helps engineering leaders get close to the codebase again

AI has brought engineering leaders like Iccha closer to the codebase by freeing up their time and taking care of mundane, repetitive tasks. 

“Cursor empowered me to be closer to the code and more self-sufficient. Over my winter break, I built a TypeScript program to analyze incident postmortems with LLMs - something I’d normally ask an engineer to do.”

AI gives leaders hands-on visibility into problems they’d otherwise have to delegate. That creates tighter feedback loops between management and execution.

Velocity depends on seniority

According to Iccha, AI’s impact varies with experience level: it makes senior engineers faster, and can actually make junior engineers slower.

“The more senior the engineer, the more innovative they are about how to leverage AI to unblock themselves. For junior engineers, sometimes the opposite happens - they don’t yet know what ‘done’ or ‘good’ looks like, and so AI can actually slow them down.”

She also mentioned how senior engineers in her team lead by example, documenting AI workflows, sharing prompt libraries, and mentoring juniors on how to verify rather than blindly trust AI output.

Takeaways for engineering leaders:

  • Start narrow: Deploy AI where the ROI is obvious. In Vanta’s case, this was testing and documentation.
  • Hire for curiosity: Growth mindset is a better predictor of success than AI tool fluency (which is bound to shift over time).
  • Redesign interviews: Test for fundamental technical acumen, but also let candidates use AI and evaluate how they prompt and review. 
  • Invest in review culture: Great engineers spend more time auditing code than writing it. This will become increasingly critical as machines write most of our code. 

Vanta’s approach shows what it looks like to integrate AI into an org with a strong engineering culture. Done right, AI can help engineering teams become thoughtful, fast, and self-sufficient.

We had a great time jamming with Iccha, and hope her playbook helps leaders in their organizations.

You can watch our full conversation with Iccha on YouTube. You can also check out previous conversations with product and engineering leaders at Intercom, Monday.com, Vercel, and more.

]]>
<![CDATA[How Monday.com Builds AI Products That Teams Actually Use]]>https://strawberryjam.ghost.io/how-monday-com-builds-ai-products-that-teams-actually-use/68deecc1fb4df30001f8cfb1Mon, 29 Sep 2025 21:27:00 GMT

Monday.com is one of the world’s largest productivity suites, used by teams at orgs like McDonald’s, Coca-Cola, Canva, and 60% of the Fortune 500. In the last year, they’ve gone all-in on AI, shipping three major new products, including Sidekick, their new assistant that helps teams go beyond managing work to actually getting work done. 

We spoke to Or Fridman, AI Group PM at Monday.com, to learn how his team builds with AI.

Here are the highlights!

Co-create with users (while avoiding scope creep)

When Or’s team set out to add AI features to Monday.com, they ran iterative experiments that combined two approaches:

  • Nail one job. Build a narrow feature that solves a concrete problem really well (like updating items on a board).
  • Leave room for discovery. Give users a blank canvas to explore new use cases, then fold the most valuable behaviors back into the product.

The result was a product that has evolved with user needs, based on real usage - instead of adding AI for the sake of AI.  

Building AI products is different

Traditional features are deterministic: inputs and outputs are defined in advance, and every user action maps to a predictable result. In contrast, AI features are non-deterministic. The same prompt can return different outputs, which means PMs have to design for variability.

At Monday.com, that forced the team to rethink the PRD. Instead of a user story that says “when the user clicks X, show Y,” they had to spell out questions like:

  • What does “good” look like? If a user asks about sprint performance, should the assistant return a paragraph summary, a bulleted list, or a chart of metrics? Each choice implies different UX affordances and evaluation criteria.
  • What controls keep users in the loop? Do they get a regenerate button, the ability to edit outputs inline, or an explicit undo path if the system makes a change?
  • How to establish trust? Should results be paired with citations back to the source data? When should the model hedge or ask for clarification instead of answering confidently?

The basics of product still apply, but the scope of the spec is larger. AI PRDs look less like feature tickets and more like data-product specifications: defining the model’s inputs, the expected shape of outputs, the evaluation metrics, and the safety mechanisms around them.
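One way to make questions like “what does good look like?” executable is to encode the expected output shape and trust requirements as a small validator. A hypothetical sketch - the shape names and citation rule are illustrative, not Monday.com’s actual spec:

```javascript
// Hypothetical sketch: turn part of an AI PRD into a runnable check.
// The allowed shapes and the citation requirement are illustrative.
const ALLOWED_SHAPES = ["summary", "bulleted_list", "chart"];

function validateAssistantOutput(output) {
  const errors = [];
  if (!ALLOWED_SHAPES.includes(output.shape)) {
    errors.push(`unexpected shape: ${output.shape}`);
  }
  // Trust mechanism: answers should cite the source data they used.
  if (!Array.isArray(output.citations) || output.citations.length === 0) {
    errors.push("missing citations back to source data");
  }
  return { ok: errors.length === 0, errors };
}
```

A check like this can run in CI against a battery of recorded prompts, which is one concrete way to design for non-deterministic outputs.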

PMs are no longer just shipping features - they’re shaping probabilistic systems.

Compressing feedback loops

AI gives product teams a way to significantly speed up product feedback loops. At Monday.com, PMs use AI-assisted tools to brainstorm, prototype, and even “vibe-code” mockups. Instead of the old linear flow - PM writes a PRD, designer mocks it up in Figma, engineer codes - those steps all blur together. A PM can draft a rough flow in minutes, vibe code a working prototype, test it with users, and refine it alongside design and engineering.

In a world where model capabilities evolve every few weeks, that speed is essential. 

What skills do they look for in PMs? 

The fundamentals of a good PM haven’t changed: be user-focused, strategic, collaborative. But Or adds one more requirement: adaptability. AI evolves weekly, not yearly. The best PMs are the ones who can keep pace, experiment, and fold new capabilities into the product without losing sight of user value.

We asked him if companies should hire PMs who specialize in AI, and Or’s answer was clear: better to hire strong product thinkers who can learn AI than hire for a narrow set of tools that may change in six months.

Looking five years out

Or expects Monday.com to evolve from being the place where teams manage work to the place where they also execute it. He also sees interaction modalities shifting: chat, questions, voice, and images will replace clicks and menus as the primary interface for work.

For a marketing manager, that could mean not just planning campaigns inside Monday, but generating assets and running them end-to-end without leaving the platform.

Actionable takeaways for builders

  1. Co-create with users (without losing focus). Involve users early to discover how they want to use AI, but anchor the product around a single, well-defined workflow.
  2. Account for non-determinism in PRDs. Define inputs, expected output shapes, trust mechanisms, and evaluation criteria upfront.
  3. Compress feedback loops. Use AI internally to turn ideas into prototypes quickly so users can guide what’s worth building.
  4. Hire for adaptability. Prioritize PMs who can learn fast over PMs who happen to know the current thing. 
  5. Design for new interfaces. Natural language, chat, voice, and images are becoming the default ways users interact with software.

We had such a great time jamming with Or! We’ve been having similar convos with product and engineering leaders at top orgs (like Intercom, Honeycomb, and Vercel) to unpack how they actually build with AI. You can find previous episodes on YouTube.

]]>
<![CDATA[Figma→Video AI workflow]]>https://strawberryjam.ghost.io/ai-animations/68d40f26c1dd000001327747Wed, 24 Sep 2025 15:40:09 GMTToday we launched Light Mode 🔆 in the Jam mobile app! Was excited to launch it and wanted to make a nice graphic to show everyone using the app what's new. I made it with Figma + AI tools (Claude + Veo3 Flow):


Was really easy to do, here's how you can make your own app screen animations using AI:

Step 1 – I started by exporting a Figma frame of the app.

Step 2 – I uploaded the Figma frame to Claude, and told it what I was trying to accomplish and asked for a prompt.

Step 3 – I uploaded that frame + prompt to Veo3 Flow's Frame to Video tool.

Step 4 – Done! Light Mode, on! 🔆

Here's the prompt used (thanks Claude). It's a little cringe when you read it as a human but it worked well for AI!:

```
Create a premium iOS app launch animation that transforms these product stills into a cinematic reveal. The animation should be elegant with sophisticated 3D camera work and lighting.

Visual Treatment:
Phones float weightlessly in 3D space with gentle rotation and subtle parallax depth
Dramatic lighting shifts from soft ambient to focused spotlighting on key moments
Premium materials: subtle reflections on glass, soft drop shadows, elegant glows around active elements
Smooth easing curves throughout (ease-in-out, no linear motion)

Camera & Movement:
Graceful orbital camera movement around the interfaces
Shallow depth of field that guides focus
Cinematic framing with negative space
Seamless transition between the two app states

Constraints:
DO NOT add new UI elements - only animate existing interface components
Maintain authentic app functionality and visual hierarchy

Tone:
Sophisticated, premium, worthy of a keynote reveal.

```

]]>
<![CDATA[How AI Impacts Customer Support, with Productboard’s Head of Global Support]]>https://strawberryjam.ghost.io/how-ai-impacts-customer-support-with-productboards-head-of-global-support/68def313fb4df30001f8cff5Fri, 19 Sep 2025 21:48:00 GMT

Pavel Malyshev leads global customer support at Productboard - a product management platform used by orgs like Autodesk, Zoom, and Salesforce. 

We were especially excited for this conversation because Productboard’s approach mirrors a trend we’ve been watching closely: AI is pulling engineering and support closer together.

At Productboard, support and engineering operate as one system. Pavel shared how his team uses AI to shorten resolution times, streamline escalations, and surface the right context so engineers can fix issues faster.

Here are the highlights from our conversation.

Resolution times are what ultimately matter 

Many companies obsess over deflection rates as the north star for support success. Pavel thinks that approach is wrong. Customers don’t care about deflection rates. They care about how quickly their problems are resolved. 

“Deflections don’t necessarily mean you get the best customer experience. Resolutions do.”

Everything Pavel and his team do is oriented towards reducing resolution times and interaction counts. To that end, they use AI to triage issues better and escalate them smarter. 

The faster an issue reaches the right engineer with the right context, the faster it gets solved. That’s where AI is making the biggest impact: shortening time-to-resolution by reducing back-and-forth interactions.

His advice to support leaders: Don’t over-index on deflections. Track resolution times and interaction counts. If AI can cut a four-message ticket down to one or two, that’s the metric that matters.

Closing the gap between support and engineering 

At Productboard, support and engineering are tightly linked - both are measured by product outcomes.

When a ticket comes in, Pavel’s team uses AI to enrich it with technical context, so engineers can get straight to fixing the problem instead of diagnosing it.

AI helps surface patterns, context, and even hints from the code itself, giving engineers the clarity they need to resolve issues faster (and freeing support from endless back-and-forth).

The golden age of DIY 

Off-the-shelf solutions like Intercom’s Fin have been useful for async replies, but Pavel thinks DIY is where the real leverage is. His team is constantly experimenting with Zapier, Make, and n8n to craft lightweight internal automations.

Whether it’s QA checks, sentiment analysis, or custom copilots, the ability to stitch together tools means teams don’t have to rely on other companies to ship features that are hyper-specific to their workflow. 

Pavel recommended starting small. Pick a single pain point (like analytics or QA) and automate with low-code tools before scaling up.

Support tickets as product feedback loops 

Support is a goldmine of customer feedback, but most orgs only analyze a fraction of their tickets. Pavel sees AI as a way to unlock the full potential of support tickets.

By running tickets through LLMs, his team can generate trend reports, voice-of-customer insights, and targeted product feedback. These insights help the product team prioritize, and give support a louder voice in the product development loop.

Even a simple pipeline that summarizes themes from 1,000 tickets can give PMs more insight than quarterly customer interviews.
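A pipeline like that can be surprisingly small. A sketch in JavaScript - the batching and prompt assembly are the real work, while `callLLM` is a stub for whichever provider API you use (the function names and prompt wording are illustrative, not Productboard’s setup):

```javascript
// Sketch of a minimal tickets -> themes pipeline.
// `callLLM` is a stub - swap in your provider's client.
function batchTickets(tickets, batchSize = 50) {
  const batches = [];
  for (let i = 0; i < tickets.length; i += batchSize) {
    batches.push(tickets.slice(i, i + batchSize));
  }
  return batches;
}

function buildThemePrompt(batch) {
  return [
    "Summarize the recurring themes in these support tickets.",
    "Return the top 5 themes, each with a one-line description and a count.",
    "",
    ...batch.map((t, i) => `${i + 1}. ${t}`),
  ].join("\n");
}

async function summarizeThemes(tickets, callLLM) {
  const summaries = [];
  for (const batch of batchTickets(tickets)) {
    summaries.push(await callLLM(buildThemePrompt(batch)));
  }
  return summaries;
}
```

Run it weekly over the ticket export and you have a standing voice-of-customer report with almost no ongoing effort.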

How support jobs are evolving with AI

There’s a lot of noise online about AI completely replacing support roles. Pavel disagrees. Instead, he sees two paths emerging:

  1. White-glove human support: support roles evolve into CSM-like roles, delivering high-touch experiences and uncovering “aha moments” for customers.
  2. AI orchestration: ops-driven roles where support reps design, tune, and oversee the automations powering support at scale.

Either way, the baseline skill set is changing. Critical thinking, analytics, and the ability to “speak AI” will soon matter more than fast typing or handling simple tickets.

The Bigger Picture

What Pavel sees at Productboard is a glimpse of where customer support is headed:

  • Resolution quality > deflection rates
  • Increased collaboration between support and engineering
  • DIY AI ops as a competitive advantage
  • Feedback loops powered by LLMs
  • New roles focused on orchestration and strategy

The core truth remains: customers don’t care how many tickets you deflected. They care how quickly and accurately their problem gets solved. 

AI isn’t replacing support - it’s redefining what great support looks like.

We had a great time jamming with Pavel! We’ve been having similar conversations with product and engineering leaders at top engineering orgs (like Vercel, Wix, and Intercom) to unpack how they actually build with AI. You can find previous episodes on YouTube.

]]>
<![CDATA[Beyond the Hype: Honeycomb's CTO on How AI Really Impacts Engineering Teams]]>https://strawberryjam.ghost.io/honeycomb-cto-ai-speedrun/68cb0de83c8e1000013df785Wed, 17 Sep 2025 20:18:41 GMTCharity Majors is the CTO of Honeycomb, an observability platform used by product teams at the world's top tech companies, like Dropbox and Intercom.

As a part of our AI Speedrun series, we asked her how her team at Honeycomb builds with AI. Below is a recap of the highlights from our conversation.

AI means experimenting more 

We asked Charity where AI is actually accelerating engineering. For her team, especially on front-end and design, it’s in the ideation phase.

“The ideation phase, when you're throwing away a lot of code anyway, has gotten a lot faster.”

Before, someone might spend a whole week going down one path just to realize it doesn’t work. Now, they can do that in an afternoon. Honeycomb’s engineering velocity has improved because they’re spending more time on the right things.

But how do you know it's the right thing? You ship it to prod.

At Honeycomb, production is the ultimate source of truth. AI helps get features into users’ hands faster. And if it doesn’t land in prod, it doesn’t count.

Great engineers are great communicators

AI has changed a lot at Honeycomb, but one thing it hasn’t changed is how they hire. Many orgs are focusing on specific AI-oriented skills, but the team at Honeycomb is sticking to what they’ve always prioritized: communication.

“We've always indexed heavily on communication skills. That’s never been more true than with AI.”

During technical interviews, engineers get the usual take-home assignment, but then they also have to explain their decisions, trade-offs they made, and their overall thought process behind the code. According to Charity, engineers who can articulate clearly are engineers who can actually leverage AI meaningfully.

The future of software: disposable vs. indispensable

Charity thinks software will fall into two distinct types:

  1. Disposable software: quick experiments vibe-coded rapidly and discarded just as easily.
  2. Indispensable software: critical infrastructure that has to be reliable, with robust observability and maintenance. 

There is a future where both these categories exist in parallel. AI has enabled disposable software - fast, experimental, low-risk tools - but indispensable software will still reign, especially in domains where there's little room for error: logistics, healthcare, etc. This category of software will always require skilled engineers and SREs to ensure reliability and safety.

“There's always going to be a place for cynics in the engineering world—those who ask, ‘What's going to happen when this crashes into reality?’”

The future of engineering tools

Builders will increasingly rely on one integrated environment - likely agent-driven IDEs connected via MCP or some other protocol - that keeps them in flow state and reduces context-switching.

There’s a lot of noise online around when this might actually happen and what engineering orgs should do about it, but Charity doesn’t care about the hype. She’s just focused on setting up the right workflows instead of obsessing over standalone interfaces. If her team has the right workflows in place, they can plug into any future agentic protocol or platform fairly quickly.

“The most important surface for engineers is their workflow. If you focus there, you'll be fine.”

We had a blast jamming with Charity! Check out similar conversations with product and engineering leaders at top engineering orgs (like Vercel, Wix, and Intercom) to unpack how they actually build with AI. Full episodes are on YouTube.

]]>
<![CDATA[Launching today! Report bugs from mobile, fast]]>https://strawberryjam.ghost.io/launching-today-jam-for-ios/6851790fbfaaa700017f6eafTue, 17 Jun 2025 14:25:20 GMT

Meet Jam for iOS, the first screen recorder made for bug reporting on mobile.

You can record your phone screen, and Jam grabs all the tech details for you (OS, battery life, etc). Share straight to your bug tracking tool like Linear or Jira. No more emailing screenshots to yourself like it’s 2024.

We are so excited for you to try it. Watch it in action.


PS – Stay tuned, Android is up next.

PPS – Shoutout and thank you to the 193 of you in the early access community who built this with us. You rock, happy launch day!

]]>
<![CDATA[Recording Links: The Nitty Gritty Details Behind Today's Launch]]>https://strawberryjam.ghost.io/just-launched-recording-links-magic-links-for-bug-reports/684092995e35cb000175bf4dWed, 04 Jun 2025 19:00:17 GMT

Earlier today we announced Recording Links, the easiest way for Product teams to capture screen recordings, repro steps, and user feedback directly inside their app, without requiring a Chrome extension or install.

To do so, we had to:

  • Design a simple + attractive recorder UX you’d happily send to your users
  • Make sure it works reliably in every major browser
  • Keep the install process as simple as it can be, and no simpler

As one of the authors on this project I’m clearly biased, but I think the solution we’ve shipped is just shy of magic. And although magicians never reveal their secrets, I write software and have been asked to cook for a minute about our work. So I’ll just tell you: we did it all with <iframe>s.

The capture frame our scripts inject on your website

But before we go too deep: try Recording Links yourself, or check out this demo Ian recorded!

Prior Art

Recording Links builds on Jam’s core extension and more-recent Jam for Customer Support (Intercom) products. Both of these are built on Jam’s core object model and capture stack, which allow us to deliver a consistent debugging experience in our own app and the others we integrate with.

Example from a Jam's "share page"

While each of our products has a somewhat different feature set, Jam’s customers expect a few core promises from all of them:

  • Users can easily start, stop, restart, and submit recordings
  • Events must be captured between start of recording and end of recording
  • We can’t lose data along the way, lest our users (or worse, yours!) lose trust in us

Fulfilling these promises in the extension can sometimes be a challenge. But since users’ data is local, it’s at least reasonably straightforward to do so.

This was comparatively difficult in our Intercom product. Specifically, that product sends users to a Recorder hosted at recorder.jam.dev, which—due to browser security constraints—uses a websocket to communicate with event capture scripts installed on our users’ sites.

The websocket approach works fine enough, but has room for improvement:

  • If our socket server is unavailable—e.g. briefly during deploys, or a user is recording while offline—created Jams may be corrupted or missing data; there is no way for the recorder script to recognize this state and inform the user.
  • Common Internet chaos—e.g. slow connections, out-of-order packets, unexpected user and/or script interactions—requires mitigation, thereby increasing code complexity and ongoing maintenance
  • It literally doesn’t work in Safari—and may someday not work in other browsers—because it relies on third-party cookies to establish a shared identifier

One of our design goals for Recording Links was to remove this network dependency altogether. The only time we expect to hear from your users is when they’re explicitly submitting a Jam.

State Partitioning

This lofty design goal introduces a classic problem: “how do we communicate between tabs locally?” Most browser primitives (e.g. localStorage, BroadcastChannel…) are restricted to same-origin communication. Strictly speaking, only Safari is that strict (Chrome and Firefox restrict to same-domain), but since same-origin is our lowest common denominator, let's stick with it.
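“Same-origin” here means exact scheme + host + port equality, which the URL API makes explicit - note that even a subdomain is a different origin:

```javascript
// "Same-origin" is exact scheme + host + port equality. The URL API
// makes the comparison explicit.
function sameOrigin(a, b) {
  return new URL(a).origin === new URL(b).origin;
}

sameOrigin("https://example.com/app", "https://example.com/other");    // true: paths don't matter
sameOrigin("https://example.com/app", "https://recorder.example.com"); // false: same domain, different origin
sameOrigin("https://example.com/app", "https://recorder.jam.dev");     // false: different site entirely
```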

Our earlier architecture—a Jam-hosted recorder.jam.dev URL—can’t communicate directly with scripts on pages you host, because it’s a different origin. But what if we put an <iframe> on your pages from the same-origin as our recorder? Well, that’s not so simple. Here are @kentcdodds and @ryanflorence discussing the issue in Oct 2023:

@kentcdodds and @ryanflorence discussing iframe peculiarities [x.com]

All browsers use a technique called “State Partitioning” to prevent cross-site tracking. What this means practically is that recorder.jam.dev frames embedded on example.com belong to a “state partition” keyed by the top-frame’s origin (i.e. example.com:recorder.jam.dev), whereas our top frame is in the top-level partition (i.e. recorder.jam.dev).

The top-level recorder.jam.dev cannot communicate with the embedded one
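The partitioning rule can be modeled as a toy function that mirrors the `example.com:recorder.jam.dev` notation above - this is an illustration of the keying scheme, not a browser API:

```javascript
// Toy model of state partitioning: storage for a frame is keyed by the
// pair (top-frame origin, frame origin). Illustration only, not a
// browser API.
function partitionKey(topFrameOrigin, frameOrigin) {
  return topFrameOrigin === frameOrigin
    ? frameOrigin // top-level partition
    : `${topFrameOrigin}:${frameOrigin}`;
}

// recorder.jam.dev opened directly vs. embedded on example.com lands in
// two different partitions, so the two copies cannot see each other:
partitionKey("recorder.jam.dev", "recorder.jam.dev"); // "recorder.jam.dev"
partitionKey("example.com", "recorder.jam.dev");      // "example.com:recorder.jam.dev"
```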

We started by exploring how to bring your origin to our contents, e.g. by asking installers to set up a recorder.example.com subdomain and point it to our content. But since Safari is strictly same-origin, we cannot assume that recorder.example.com can natively communicate with example.com. In fact, assuming so would break our goal of supporting all major browsers.

recorder.example.com is same-domain but not same-origin to example.com

Our next idea was to bring our contents to your origin, by embedding both the Capture and the Recorder scripts onto your pages via iframe. This way, both capture.js and recorder.js would live in the example.com:recorder.jam.dev partition and could communicate with each other!

Both scripts embedded from recorder.jam.dev are in the same state partition
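Frames in the same partition can then coordinate via postMessage. As a hedged sketch (the message shape here is hypothetical, not Jam’s actual protocol), the receiving script should still validate origin and shape before trusting anything it receives:

```javascript
// Hypothetical guard for messages arriving across the frame boundary:
// accept only messages from an allowed origin with the expected shape.
function acceptCaptureMessage(event, allowedOrigins) {
  if (!allowedOrigins.includes(event.origin)) return null;
  if (!event.data || event.data.type !== "jam:capture") return null;
  return event.data.payload;
}

// In the browser, this would hang off the message event:
// window.addEventListener("message", (e) =>
//   acceptCaptureMessage(e, ["https://recorder.jam.dev"]));
```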

The rest of the work was not easy—try building an app that works equally well in embedded vs. non-embedded states, or reliably streaming video content across an iframe boundary in all major desktop browsers—but it was mostly app work, on top of infrastructure we could trust.

Managed Embeds

Having determined our architecture requirements, we also prioritized figuring out exactly how we would distribute our iframes. We had previously considered a few other mechanisms—the CNAME approach of course, and a “proxy recorder” where embedders’ servers fetched and served our content directly—but there were only two meaningful mechanisms in our iframe bucket:

  • DIRECT EMBED—where embedders directly embed an iframe with a URL we provide:
<html>
  <head>
    <title>Recorder Page</title>
    <!-- YOUR PAGE'S `<head>` -->
  </head>
  
  <body>
    <iframe src="https://recorder.jam.dev/recorder/PASS-THE-ID"
      sandbox="allow-scripts allow-same-origin allow-popups">
    </iframe>
  </body>
</html>
  • MANAGED EMBED—where embedders use a script we provide to mount our iframe:
<html>
  <head>
    <!-- on all capture-able pages -->
    <script type="module" src="https://js.jam.dev/capture.js"></script>
    
    <!-- at least on recorder pages -->
    <script type="module" src="https://js.jam.dev/recorder.js"></script>
  </head>
</html>
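Conceptually, the managed-embed script mounts the same iframe the direct embed hand-writes, so you don’t have to. Here’s a sketch of that idea as a pure function; this is hypothetical and the real recorder.js does considerably more:

```javascript
// Hypothetical sketch of what a managed-embed loader boils down to:
// building the same iframe markup as the direct-embed example above.
// (recorderId is an illustrative parameter, not Jam's actual API.)
function buildRecorderIframe(recorderId) {
  const src = `https://recorder.jam.dev/recorder/${recorderId}`;
  return (
    `<iframe src="${src}" ` +
    `sandbox="allow-scripts allow-same-origin allow-popups"></iframe>`
  );
}
```

The advantage of the managed form is that if we ever need different iframe attributes, we change the script, not your site code.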

We also wanted to make sure that Jams recorded on your pages only get sent to you—you don’t want someone from evil.com to spoof an example.com Recording Link and steal the privileged network and console logs from an unsuspecting user.

To achieve this, we needed to design a domain verification strategy. We considered three options:

  • Over DNS—configure a TXT record with a value we expect
  • Over HTTP—return a response we expect from a configured endpoint
const express = require('express');
const app = express();

// Serve the expected body at the URL Jam provides
app.get("/PROVIDED-URL", (_req, res) => res.send("PROVIDED-BODY"));

app.listen(3000);
  • Over HTML—attest the teams you allow wherever you load the recorder script
<html>
  <head>
    <script type="text/javascript" src="https://js.jam.dev/recorder.js"></script>
    <meta name="jam:team" content="TEAM-ID-COPIED-FROM-SETTINGS" />
    <meta name="jam:team" content="ANOTHER-TEAM-ID-FROM-SETTINGS" />
    
    <!-- more head content -->
  </head>
  
  <body>
    <!-- body content -->
  </body>
</html>
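For the in-HTML option, verification amounts to reading those meta tags back out of the page. A regex-based illustration of the idea (a real verifier would use a proper HTML parser):

```javascript
// Illustrative extraction of allowed team IDs from jam:team meta tags.
// (Regex sketch for clarity; not Jam's actual implementation.)
function extractTeamIds(html) {
  const ids = [];
  const re = /<meta\s+name="jam:team"\s+content="([^"]+)"\s*\/?>/g;
  let match;
  while ((match = re.exec(html)) !== null) ids.push(match[1]);
  return ids;
}
```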

Consider the above options against the three criteria we’d set out while spec’ing the work:

- Prefer copy/paste-able solutions vs. ones that require config
- Prefer solutions implementable by lower-authority personnel
- Prefer solutions that look familiar and obvious vs. strange and opaque

Managed Embeds w/ in-HTML domain verification is just… cleaner. There’s only one line that requires config, and we don’t have to ask users to update their site code should we need different iframe attributes, for example. Neither piece requires the implementer to have permission to edit DNS settings. And everyone recognizes script embeds and meta tags!

As before, the rest of the work was not easy—coordinating state across multiple frame boundaries is challenging!—but we thought it much more convenient for users and maintainable for us to own this complexity. I’m still impressed we managed to pack such a powerful product into three lines of HTML!

What's Next?

We have ambitious plans for Recording Links: new ways to create them, new ways to consume them, new ways to make them even easier to use. Today, we celebrate how far we’ve come; Recording Links has surpassed many of our expectations internally, and we hope it delivers you not just the value but the wows we’ve been experiencing.

Have you tried Recording Links yet? If not, what are you waiting for? It’s only 3 lines of HTML to get started; I’d order you a coffee while you work, but let’s be real—if you made it this far, by the time the coffee was ready you’d already be done.

]]>
<![CDATA[How Intercom Builds with AI, with CTO Darragh Curran]]>https://strawberryjam.ghost.io/how-intercom-builds-with-ai/68eeaab6cd9b950001a5f55dFri, 23 May 2025 20:02:00 GMT

Intercom is a customer support platform used by orgs like Microsoft, Anthropic, Perplexity, Vanta, and Clay. Every month, they serve over 600M end-users! The Intercom team has spent a lot of time thinking deeply about AI’s impact on customer support. They were one of the first movers in the space when they launched Fin, their AI customer service agent, earlier this year.

We spoke to Intercom’s CTO Darragh Curran to find out how his team actually uses AI. 

Here are the highlights!

AI as a product, not a plugin

Intercom has been shipping AI features for years, but the step change with modern LLMs unlocked a completely new set of possibilities. Instead of bolting AI onto workflows, they built the infrastructure to test, measure, and iterate - hundreds of A/B tests, all compounding over time.

That’s how Fin’s resolution rate grew from ~26% to over 50%, often improving a percentage point each month. The process goes beyond just swapping in new models: it treats AI as a product surface for running experiments.

Measuring engineering velocity 

We asked Darragh if AI has made their engineering team faster. He was candid here: coding feels easier, but the bottlenecks have moved elsewhere, specifically to decision-making, wait times, and review loops. So they measure what matters:

  • PR throughput per engineer.
  • Feedback wait times (tests, reviews, deploys). If someone waits six hours for a review while another waits 30 seconds, they investigate.
  • % of code changes co-authored by AI. Today, ~1–2% of changes are fully written by AI; a much larger share is AI-assisted. 

In short, AI can help increase velocity only if you know where the bottlenecks really are.

Intercom’s internal dev tools team 

Inside Intercom, there’s a team dedicated to making AI the default developer environment. They run autonomous jobs that remove dead code, clean up stale flags, and file PRs: tedious but important jobs no human wants to prioritize.

Over time, these automations evolve into repo-wide refactors and eventually bigger lifts like framework swaps. Think of it as an AI dev-tools startup inside the company, proving safety with narrow scopes before expanding to higher-stakes jobs.

“There’s valuable work AI can do that humans will never prioritize - even though it compounds.”

Creating a shared repository of AI engineering best-practices 

Some engineers intuitively use AI tools; others need a practical framework to upskill. Darragh expects the team average to rise a full point on their internal “impact scale” just by evenly distributing know-how, even if the tools stay static. 

They proactively standardize prompts, document repeatable wins, and review diffs as a team so these patterns spread.

“If you’re not using these tools, you’re probably underperforming - and you’ll get left behind.”

Choosing metrics that matter

Intercom rejects “feel-good” dashboards. They use behavior-shaping metrics:

  • Resolution rate (for agents like Fin), constrained by accuracy.
  • PR throughput, review latency, deploy cadence (for engineering).
  • AI contribution share (to set expectations and track adoption).

Set thresholds that trigger action (e.g., review waits > 2 hours = auto-escalation). Make them visible so teams self-correct.
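The threshold idea is simple enough to sketch. The data shape below is hypothetical; the post doesn’t describe Intercom’s actual tooling:

```javascript
// Flag any PR whose review wait exceeds the escalation threshold.
// (Illustrative data shape: { pr, hours } records.)
function reviewsToEscalate(waits, thresholdHours = 2) {
  return waits.filter((w) => w.hours > thresholdHours).map((w) => w.pr);
}

reviewsToEscalate([
  { pr: 101, hours: 6 },    // escalate: someone waited six hours
  { pr: 102, hours: 0.01 }, // ~30 seconds: fine
]); // → [101]
```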

He thinks teams should publish a weekly metrics memo that documents resolution rate changes, top latency offenders by team, percentage of AI-assisted changes, and experiments that moved the needle.

Accuracy over spectacle

Intercom’s biggest lesson with Fin was: correctness beats cleverness. Customers don’t care if an agent sounds smart. They just want to have their problem resolved quickly. Yes, the magical moments AI can create for the customer matter, but every UX win at Intercom is paired with a truth metric: hallucination rate, retrieval coverage, source accuracy.

When a change boosted deflection but hurt accuracy, they rolled it back. A customer’s trust is harder to earn than it is to lose.

Takeaways for builders

  • Build the engine, not the demo. An A/B infrastructure, guardrail metrics, and repo-wide automation paths matter more than the flashy first win. 
  • Fix the queues. If AI doesn’t move velocity, your bottleneck is in reviews, tests, deploys, or prioritization.
  • Automate the grunt-work. Start with dead code and flag cleanup; graduate to refactors. The compounding ROI is real.
  • Codify usage. Make AI-first environments default and share patterns until the average rises.

These principles have led to steady, compounding improvements in both customer trust and developer productivity.

We had a great time jamming with Darragh! We’ve been having similar conversations with product and engineering leaders at top engineering orgs (like Vercel, Wix, and Vanta) to unpack how they actually build with AI. You can find previous episodes on YouTube.

]]>
<![CDATA[How Visor Ships Features 30% Faster with Jam]]>https://strawberryjam.ghost.io/how-visor-ships-new-features-faster-with-jam/67e203c1d14c690001cbfb54Tue, 25 Mar 2025 14:28:52 GMT

Visor powers critical business decisions at some of the world's largest and most demanding organizations. Used by teams at Amazon, Tesla, Nike, and digital agencies worldwide, Visor connects disparate data sources into intuitive dashboards.

That sounds like a complicated piece of software, but that’s the beauty of Visor. They make it easy: reimagining the future of work, they enable people to work in the comfort and safety of their spreadsheets, with all the data from their integrated applications, and without traditional B2B software restrictions around sharing, accounts, and complex UIs.

Recently, we spoke with Dmitriy Redkin, Product Manager at Visor, about how they’re using Jam to accelerate development cycles and launch new features faster, while maintaining the exceptional quality their enterprise clients expect.

The challenge

Eliminating development bottlenecks caused by traditional bug reporting

Before implementing Jam, Visor's bug reporting system posed significant inefficiencies across their organization:

"Before Jam, we were sending screenshots or using Loom for something that isn't built for reporting. I would have to take screenshots of where the bug occurred in the application, open up the console, take a picture of the right place in the console. It was slow and rarely specific enough for engineers."
— Dmitriy Redkin, Product Manager, Visor

The team needed to solve four key challenges:

  • Tickets were missing crucial technical context for engineers
  • Low visibility into whether reports were being addressed
  • Lengthy back-and-forth communications between product and engineering
  • Significant time spent documenting bugs rather than building features

The solution

One-click bug capture with complete developer context

With Jam, Visor implemented a streamlined bug reporting workflow:

  • Full session capture with network requests and console logs
  • Automatic notification when engineers review bug reports
  • One-click sharing via Jam’s Slack integration and fast link generation
  • Built-in accountability tracking
"Now with Jam, it’s super easy to send the relevant metadata, network and console logs to engineers. And we get notifications when Jams have been opened, so we know an engineer is starting to work on something. They're being held accountable and PMs are happy because we know that the things that we report are actually being solved."

The outcome

2+ hours saved per day & faster launches with company-wide quality focus

Since implementing Jam, Visor has significantly improved product velocity while maintaining their high quality bar.

"Before Jam, we were probably spending about 30 minutes or so between creating the report, and then engineering trying to reproduce it. With Jam, we're saving almost two hours whenever we're trying to improve the quality of our application."

These time savings translate directly to better product development:

"The two hours that I get back now that I use Jam is amazing because I get to focus on the parts of the app that work and figuring out how to do more of that as opposed to how it doesn't work and the painful parts of building software in the 21st century."

Jam has also enabled Visor to:

  • Involve the entire company in raising the quality bar even higher
  • Engage leadership earlier in the development process
  • Accelerate feature launches
  • Enhance their competitive advantage through superior product quality

Feature release at 2x speed: Launching dashboards & analytics with company bug bash

Visor recently launched a new dashboard and reporting capability – a critical feature for their enterprise customers. Jam played a crucial role throughout the entire development lifecycle:

"During the project, our Slack threads for the project channels were basically blowing up with Jams. Every message was a Jam, and every thread was using the details that were caught by the Jam."

The entire company, including founders, participated:

"Everybody was bug bashing in our company. So whenever we do all hands on deck, the designer, the engineers, and a marketing person usually comes in to make sure the feature works or to flag where it doesn't work so we can find out how to make it better."

The result was a faster, more successful launch:

"With Jam, we were definitely able to launch the feature a lot faster and sleep more soundly at night after the production launch. We knew we had battle tested the feature out in a more robust way. And the launch went really, really well."

Visor's competitive advantage: Unbeatable quality, now faster with Jam

For Visor, delivering exceptional quality is a critical competitive advantage in a crowded marketplace. It’s how their small, but mighty team stands out against industry giants.

"Quality is really important in a small startup because if you don't have quality, you already don't have enough features. And so then you're left with really nothing other than a spirit to win. Exceptional quality enables you to retain champions and increase retention overall."

Jam has become an essential tool in maintaining this advantage:

"What we differentiate ourselves by is the fact that we have really dedicated engineers building high quality features. That's not possible without the ability to quickly resolve issues that come up both in development and in production."

By streamlining bug reporting and making it accessible to everyone at Visor, Jam has helped create a culture where quality is everyone's responsibility:

"Jam lowered the bar for everybody on our team, including our founders to get involved in finding problems, bug bashing, and ensuring the quality of our application — by being always just one click away from recording really great videos with developer context for our team to consume."

For Visor, this translates to happier customers, stronger retention, and a more competitive product in the market – all while saving valuable engineering hours every single day.

]]>
<![CDATA[How companies using Jam are saving six figures annually]]>While we've always been delighted to hear from customers that Jam makes debugging faster and easier, we wanted to put concrete numbers behind Jam’s impact to make sure we’re building a product that really makes a meaningful difference for them. We worked closely with

]]>
https://strawberryjam.ghost.io/how-companies-using-jam-are-saving-six-figures-annually/67b4d81eb428f5000144f7ebTue, 18 Feb 2025 18:58:05 GMTWhile we've always been delighted to hear from customers that Jam makes debugging faster and easier, we wanted to put concrete numbers behind Jam’s impact to make sure we’re building a product that really makes a meaningful difference for them. We worked closely with our customers to investigate their engineering velocity data before and after deploying Jam, and we’ll share with you the results today.

We worked with customers to measure how quickly high-priority tickets were fixed when a Jam was attached vs. when none was. They found tickets with a Jam attached were fixed 12 hours faster. This makes sense: without a Jam, engineers have to manually investigate issues from scratch and wait on back-and-forth communications to gather the necessary debug info.

One company found that after deploying Jam, bugs were fixed in a single day instead of 4-5 days, which led to an 88% reduction in reported bugs. In addition, they required 84% fewer troubleshooting calls, relying on a Jam instead – saving both their customers and their engineers from more meetings in the day.

But what does this mean in dollars and cents? First, we’ve found that each Jam saves a minimum of 35 minutes of engineering investigation time. This is actually quite conservative based on our data, but we wanted to err on the side of caution. We then assumed each developer handles roughly three bugs per week.

Taking a mid-level engineer's salary in the US ($175k) and factoring in benefits and taxes ($225k total), we arrived at about $1.80 per minute of engineering time. This means each Jam saves approximately $63 in engineering time (35 minutes × $1.80).

For a single engineer over a year, this adds up to $9,072 in savings, accounting only for direct engineering time saved. We’ve also found Jam saves companies around $3,000 per PM, based on how quickly they can triage and communicate these issues. We haven’t factored in the revenue impact of reduced customer churn from having fewer longstanding issues, or of being able to deliver more new features, though operational efficiency and product quality certainly correlate with customer revenue beyond time saved.

Just purely in engineering costs, Jam saves companies $9,000+ per engineer per year. With a workforce of 50 engineers, that’s $453,600 in total savings from deploying Jam. You can use this ROI calculator to measure the savings you can reliably expect by deploying Jam in your own organization.
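The arithmetic above can be reproduced in a few lines. The figures are the post’s own, except the 48 working weeks per year, which is the value implied by the $9,072 annual total:

```javascript
// Reproducing the post's savings math. All figures from the post except
// weeksPerYear, which is the assumption implied by the $9,072 annual number.
const costPerMinute = 1.80;       // ~$225k fully loaded comp, per working minute
const minutesSavedPerJam = 35;
const savingsPerJam = minutesSavedPerJam * costPerMinute;              // $63
const bugsPerWeek = 3;
const weeksPerYear = 48;
const perEngineerPerYear = savingsPerJam * bugsPerWeek * weeksPerYear; // $9,072
const fiftyEngineers = perEngineerPerYear * 50;                        // $453,600
```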

If you’re interested in learning more about how other enterprises are using Jam wall to wall, saving their teams $1M for every 110 engineers on their team, you can book a demo.

Thanks for being a part of Jam. We’re really excited about making it a lot faster to develop software because that’s how the future can arrive sooner for everyone. Appreciate you being on the journey with us to make engineering a lot more productive, one Jam at a time.

]]>
<![CDATA[How We Built Video Annotations w/ tldraw]]>https://strawberryjam.ghost.io/how-we-built-video-annotations-w-tldraw/678ad7c6a324cf00018e4f7bFri, 17 Jan 2025 22:28:02 GMT

We built video annotations with tldraw! It’s a new feature we’re launching next week, and we’re really excited for all of you Jamming to try it. So today, Jam engineers Max, Aidan, and Rui get into the technical details of implementing the tldraw library - so you can draw stuff while recording your screen.

Excited to show you what we built!

0:37 Why implementing annotations was so different than the blur tool
2:55 How Max discovered we already had a tldraw license
4:25 Why we love tldraw: React-SVG dual architecture & more details
8:55 Demo of video annotations & why it’s different than Jam’s screenshot feature
11:29 Why we ultimately decided to use tldraw for video too (it looks so nice!)
12:52 Our biggest takeaway for building w/ 3rd party libraries

Episode links:
- Try tldraw
- Check out the episode about Jam's new blur tool

]]>
<![CDATA[Getting Ready to Launch New Blur Tool! (Figma Tour)]]>https://strawberryjam.ghost.io/getting-ready-to-launch-new-blur-tool-figma-tour/678998fda324cf00018e4f63Fri, 10 Jan 2025 23:45:00 GMT

We’re getting ready to release Jam’s newest feature, the blur tool! We can’t wait for you all to try it, so this week on Building Jam we’re showing you everything. Figma walkthrough, staging demo, and all the unexpected twists and turns of blurring what’s in your browser — as you’re recording!

1:08 A lil blur tool demo
2:26 Figuring out an extra setting w/o cluttering our extension
5:27 Selecting, clicking + more design details in Figma
6:35 Why not make the selector a strawberry?
8:30 Why we decided against per-team access (more free blur)
13:14 The biggest eng challenge? Everything but the core feature!

Subscribe to Building Jam on YouTube, Spotify, and Apple Podcasts.

New episodes drop every Friday at 10AM ET. See you there!

]]>
<![CDATA[9 lessons we learned building Jam in 2024]]>https://strawberryjam.ghost.io/9-startup-lessons-we-learned-building-jam-in-2024/677c1af9f14ef80001ebaebeMon, 06 Jan 2025 18:11:38 GMT

Thank you for being a part of Jam this year! We crossed 6 million Jams, 170k users, shipped 31 new features and an entirely new Jam for your helpdesk, saved engineers 50+ years of debugging time (!) and got to meet 2,000 of you at a Jam meetup.

Everyone who Jams is a builder trying to make some corner of the world better through software. Y'all are awesome.

So, we wanted to end the year by sharing with you — builder to builder — 9 lessons we learned at Jam this year. And if you feel inspired, I'd love for you to let me know a lesson you learned building your product.

1. Small teams, high ownership
This year we grew from 11 to 20 people, and split from one team into specialized pods. And… small teams with true ownership run faster and do more than I ever thought possible.

2. Quality requires engineering discipline
It's a practice you have to keep. You don't just work on performance "this quarter". It's a mindset and a practice you stick to every day.

3. Getting over our process allergy
When you're small, it's important to be allergic to process. But as you grow, a bit of process is needed so you can do more than anyone can keep in their head. This year we added just a few templates, checklists, and weekly reviews. At first I was resistant, but now I can't imagine how else we could manage so many parallel projects at any given time.

4. Treat everything as user experience
Even if it's an email, it's still a user experience. It takes longer to craft everything when you think like that. But, you end up shipping things that you're excited for users to experience! And that's super fun.

5. Great dev teams do the boring things well
Investing in the boring things unlocks you to move on to new problems. Docs, testing, deployment, infra. It's really boring to upgrade your servers, but all that investment means we get to go even faster now ("and sleep better" says the currently on-call engineer).

6. 1 + 1 designers = 3 designers
I used to worry that adding a second designer would slow us down, because instead of one person having all the context, they would need to coordinate. But actually, when we grew the design team, we were able to do even more because they had thought partners to think through our hardest design challenges. That was really cool to see.

7. The simple version is better
When in doubt, ship the simplest thing. The live Jam product today started as an internal project called "Simple Jam". The live Jam pricing started as an internal project called "Simple pricing". The simplest version is simply better.

8. Hire people who awe you
Hire people who love what they do. Hire people who care so darn much, you can't help but smile when they share. The joy of building things happens in moments, day to day, week to week. The people you get to build with are everything.

9. Time moves really fast in a startup
A lot of things change in a year. Things grow up quickly. Cherish the moments :)

Now, onto 2025! Excited to continue the Jam journey with all of you. Thanks for being a part of it. Happy new year!

💜, the Jam team:

9 lessons we learned building Jam in 2024

]]>
<![CDATA[Jamboree: 48hrs to build whatever we wanted!]]>https://strawberryjam.ghost.io/48hrs-to-build-whatever-we-wanted-hackathon/67899616a324cf00018e4f41Fri, 13 Dec 2024 23:36:00 GMT

We just had our first company hackathon! It was so much fun to get together & build stuff IRL! From AI agents to help you fix bugs & test fixes to silly buttons that break websites, we’re excited to show you what it was like.

We hope you enjoy.

Subscribe to Building Jam on YouTube, Spotify, and Apple Podcasts.

New episodes drop every Friday at 10AM ET. See you there!

]]>