Create custom smart trackers


Article summary

Smart trackers use AI to identify and surface specific concepts by taking into account diverse words, sentence structures, and contexts.

To train a custom smart tracker, you need at least 500 recorded English calls; best results are achieved with at least 1500 calls. If you have fewer than 1500 calls, choose concepts that are fairly common, since this enables us to build a model with many examples of sentences that positively match your concept. This is important for building a strong smart tracker. There’s no minimum number of calls needed for using out-of-the-box smart trackers, which can be activated right away from your Trackers page.  


Your org or workspace doesn’t have 500 English calls yet? Or maybe it does, but the concept you want to track isn’t that common. In that case, you may want to build a keyword tracker, which surfaces words and phrases in your calls. For more, see Create keyword trackers.

Basic steps

Click Company settings > Customize analysis > Trackers.

  1. In the top right corner, click + Create tracker, and then choose Smart Trackers.

  2. Give the tracker a name and description that reflects the concept you want to track. These should be meaningful enough so that people at your company who see the tracker understand what it’s tracking.

  3. Use filters to define how the tracker is trained. These filters focus the tracker so it produces more accurate results. Once activated, the tracker adheres to these same filters.

    1. Track when mentioned by: Set whether you want the tracker to train on what people at your company say, what customers say, or what anyone says.

    2. Track in: Calls only, or calls and emails. (This capability is rolling out in July 2024; results will be visible in deal boards, to start.)

    3. Web conference or telephony calls

    4. Internal or external calls

    5. Click More filters to choose additional filters, such as deal stage and team.

  4. Give at least 5 examples of real sentences from real calls that fit this concept. We use these examples to pull sentences from your calls that match (or don't match) the concept.

  5. Tag sentences to train the model to identify which sentences match your concept, and which ones don’t. Basic training for the model requires 4 rounds of tagging. Each round contains 25 sentences.

  6. After 4 rounds of tagging, review the tracker to see if it’s producing accurate results.

  7. Results look good? Activate the tracker. You can set up a stream when you activate it to automatically collect relevant calls in a folder. Want more accurate results? Keep training the tracker by completing more rounds of tagging.

  8. After activation, you can continue training the tracker to improve accuracy. Go to Company settings > Trackers, locate the tracker you want to keep training, click the action menu beside its name, and select Train more.

Deep dive

1. Set up the tracker

Tracker name: This name is displayed everywhere the tracker appears, so choose a name that is short and meaningful. For example, “Pricing objections” is better than “Pricing” or “Customers who say that the price is too high”.

Description: Describe the concept behind the tracker so that anyone who sees the tracker understands what it's tracking. This description appears when someone hovers over the tracker name.

Tracker filters: Filters narrow down the types of calls that the tracker will be applied to, so you get accurate, meaningful results. Mandatory filters include the following:

  • Track when mentioned by: Decide whether you want the tracker to be applied to what people at your company say, what the customer says, or anyone.

    • Want to track how your reps are promoting new initiatives? Select People at (your company).

    • Want to track what customers are saying to see whether reps are following your sales methodology? Select People not at (your company).

  • Track in: Decide whether you want to track the concept in calls only, or in both calls and emails. (This capability is rolling out in July 2024; results will be visible in deal boards, to start.)

  • Web conference or telephony calls: Set whether the tracker is applied to outbound calls, inbound calls, conference calls or all three.

  • Internal or external: Set whether the tracker is applied to calls that took place between team members only, or calls that took place with customers.

If you want to further specify which calls the tracker is applied to, click More filters and you’ll see the following optional filters:

  • When it was said

  • During which opportunity stage

  • By which team

For example, let’s say you want to track how your customer support team is introducing themselves on calls from customers. You can choose to apply the tracker to:

  • Inbound calls: From customers to the support center

  • External calls: Involving team members and customers

  • Said during: The first 5 minutes of the call, in the discovery stage, by the Customer Support team

Once you’ve set these filters, click NEXT and you’ll move to the page where you provide example sentences.


Once you set filters for which types of calls you’ll train the tracker on, these filters will be applied going forward and cannot be changed. So, if you train the tracker on inbound calls only, you’ll only be able to use the tracker on inbound calls. If you train the tracker on Jane Smith’s team only, you’ll only be able to use the tracker on Jane Smith’s calls.

2. Give example sentences

Once you’ve prepared the tracker settings, you’re ready to start building the AI model. The first step is giving example sentences.

These sentences are what the AI model uses to ‘understand’ what you are looking for. Provide at least 5 examples of real sentences from real calls that you want the tracker to surface.

Quick tips for writing great example sentences

  • Keep the sentences short and precise.

    • “Tell me a bit more about the main challenges you’re facing right now.”

  • Choose sentences that have different words.

    • "This is John on a recorded line."

    • "My name is Claire and I’m calling from a monitored line."

  • Use a variety of sentence types.

    • "That’s just not in our price range."

    • "Could you go any lower on that cost?"

  • Look for sentences that are specific, not general.

    • "Our main priority is driving higher conversion rates through the sales funnel."

  • Make sure each example is a single sentence only.

  • When copying from transcripts, copy full sentences and edit as necessary.

3. Tag sentences to train the model

Using the example sentences you provide, we create a set of sentences from your existing calls. Some of these sentences are similar to your sentences; some aren’t. Tag the sentences so the model learns which ones fit your concept and which ones don’t.

  • Tag sentences YES if they fit your concept.

  • Tag sentences NO if they don't fit your concept.

  • Tag sentences NOT SURE if you can’t tell whether they fit.


The sentences you’ll be tagging are in bold. The sentences that are not in bold, before and after the bolded sentences, give you context.

Want to hear a snippet of the call to be sure of the context? Go ahead and click Go to call in the bottom right.

Each round of tagging includes 25 sentences and takes about 10 minutes. This includes the time it takes you to tag the sentences and the processing time for us to train the model.


During the first 4 rounds of training, you're building the model, so expect to tag many sentences NO. This is fine, and expected, because it’s part of the training process. The model can’t learn from YES tags alone, because then it doesn’t learn which types of sentences to avoid.

4. Review the model and evaluate it

After you've completed 4 rounds of tagging, you’ve trained the model enough to review and evaluate its results.

To do this, go to the Review model page.

The results you see on this page are an estimation of the tracker’s performance, and the types of sentences the tracker will surface if you activate it right now.

At the top, you’ll see a performance estimation. Below this are 20 sentences that the model has surfaced in your calls. Unlike previous screens, these aren’t sentences that need to be tagged, and the model isn’t deliberately giving you samples of sentences that don’t match your concept. These are samples of the types of results you’ll get when you activate the tracker.

If a majority of the results fit your concept, activate the tracker. If you’re not satisfied with the results, keep tagging sentences to improve the tracker’s accuracy.

What should you do if you want better tracker results?

You can continue training the tracker with more rounds of tagging, and by checking the results after every round. During rounds 5 to 10, the results should improve with each round.

What should you do if the tracker results are way off?

If the results aren’t satisfactory, and they don’t get better through rounds 5 to 10, consider creating a new tracker and using different sentences to represent the concept.

5. Activate the tracker

When you’re satisfied with the model’s results, activate the tracker. You’ll be asked to choose whether you want to apply the tracker to upcoming calls only, or also to calls that happened in the past (up to 12 months).

When you activate the tracker, you can automatically create a stream based on tracker mentions. To learn more about streams, see Create and manage streams.

If you apply the tracker to calls that happened in the past, it will take up to 24 hours for the results to be processed. You’ll get an email notifying you when the results are ready. Until then, you can view partial results on the Search page.

6. Train after activation

You can train smart trackers after activation to improve accuracy. To do so, go to Company settings > Trackers, locate the active tracker, click the action menu beside its name, and select Train more.

7. View tracker results

Once your smart tracker is set up, you can view results in the following places:

  • Search page

  • Team stats

  • Streams

  • Saved alert emails

  • Initiative boards

  • Calls API

  • Calls CSV

  • Playbook (includes email, rolling out in July 2024)

  • Deal activity panel (includes email, rolling out in July 2024)

  • And more


Dive deeper with the Smart tracker course at the Academy, and learn how to create accurate, insightful smart trackers for your team.

For a winning recipe for tracking team initiative adoption with smart trackers, see: Tracking the performance and adoption of strategic initiatives

Assessing smart tracker performance

Wondering how accurate your smart tracker is? Go to Review model in any smart tracker that has at least 4 rounds of training and you’ll see an estimation of its performance.

This estimation is based on how the smart tracker performs on sample snippets that were tagged Yes, No, or Not sure when someone in your company set up the smart tracker.
It’s not an exact measure of how the tracker will perform in real life, but it gives you a fair and straightforward approximation.


Precision & hit rate (recall)

What is precision?

Precision refers to the percentage of smart tracker detections that are correct. For example, when precision is 80% and the smart tracker detects 10 snippets, that means 8 of these snippets are correct detections and 2 of them are false.

Another way of describing precision is by assessing the number of true positives and false positives. True positives are detections that are correct. False positives are detections that are incorrect. When precision is 80%, it means that there were 8 true positives and 2 false positives.
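The arithmetic above can be sketched in a few lines of Python. This is purely illustrative (the product computes the metric internally); the `precision` function is a hypothetical helper, not part of any product API:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of the tracker's detections that are correct."""
    total_detections = true_positives + false_positives
    # No detections at all means precision is undefined; return 0.0 here.
    return true_positives / total_detections if total_detections else 0.0

# The example above: 8 true positives and 2 false positives.
print(precision(8, 2))  # 0.8, i.e. 80% precision
```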


When does high precision matter?

High precision matters when:

  • You want to reduce false leads

  • You want to make optimal use of your resources

What is hit rate (recall)?

Hit rate, also known as recall, refers to the percentage of correct snippets that the smart tracker detects out of all the correct snippets that exist. For example, when the hit rate is 70%, it means that the smart tracker detected 7 correct snippets for every 10 correct snippets that actually exist. It missed 3 correct snippets.

Another way of describing hit rate (recall) is by assessing the number of true positives and false negatives. True positives are detections that are correct. False negatives are detections that were missed. When the hit rate (recall) is 70%, it means that there were 7 true positives and 3 false negatives.
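The same arithmetic applies to hit rate, sketched below as an illustrative Python helper (the `hit_rate` function is a hypothetical name, not a product API):

```python
def hit_rate(true_positives: int, false_negatives: int) -> float:
    """Fraction of all correct snippets that the tracker actually detected."""
    total_correct = true_positives + false_negatives
    # No correct snippets at all means the rate is undefined; return 0.0 here.
    return true_positives / total_correct if total_correct else 0.0

# The example above: 7 correct snippets detected, 3 missed.
print(hit_rate(7, 3))  # 0.7, i.e. a 70% hit rate
```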


When does a high hit rate (recall) matter?

A high hit rate (recall) matters more when:

  • You want to detect as many relevant snippets as possible

  • You care less about seeing snippets that aren’t relevant than you care about missing snippets that are relevant

Estimating missed snippets

This estimation is based on test sets of snippets that were tagged when the smart tracker was set up. We already know which snippets match the concept and which ones don’t, so when we run the smart tracker on them, we cross reference the results with what’s already been tagged.

How we calculate these metrics

We calculate these metrics by running the model on snippets that have already been tagged. When the model was trained, at least 100 snippets (4 rounds) were tagged Yes, No, or Not sure. We set aside 10% of these snippets for validation (known as a held-out set) and used the other 90% to train the model.

After training the model, we test it on the held-out set to see how many of the Yes snippets it detects correctly, how many Yes snippets it doesn’t detect, and how many No snippets it detects incorrectly.

We do this once, remix the snippets, set aside a different held-out set, and test the model again. We repeat this process 10 times to get the performance estimation.
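The repeated held-out evaluation described above can be sketched in Python. This is an illustrative stand-in, not the product's actual code: `evaluate` and `keyword_model` are hypothetical names, and the toy keyword matcher stands in for the real smart tracker model.

```python
import random

def evaluate(tagged, train_and_predict, rounds=10, holdout=0.1, seed=0):
    """Average precision and hit rate over repeated 90/10 splits.

    tagged: list of (snippet, label) pairs; label is True for Yes tags.
    train_and_predict: function(train_set, snippet) -> bool prediction.
    """
    rng = random.Random(seed)
    precisions, recalls = [], []
    for _ in range(rounds):
        data = tagged[:]
        rng.shuffle(data)                        # remix the snippets
        k = max(1, int(len(data) * holdout))     # set aside ~10% for testing
        test, train = data[:k], data[k:]
        tp = sum(1 for s, y in test if y and train_and_predict(train, s))
        fp = sum(1 for s, y in test if not y and train_and_predict(train, s))
        fn = sum(1 for s, y in test if y and not train_and_predict(train, s))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(precisions) / rounds, sum(recalls) / rounds

# Toy stand-in "model": predict Yes when the snippet shares a word
# with any positively tagged training snippet.
def keyword_model(train, snippet):
    keywords = {w for s, y in train if y for w in s.split()}
    return bool(keywords & set(snippet.split()))
```

Averaging over several reshuffled splits, rather than scoring a single held-out set, makes the estimate less sensitive to which snippets happened to land in the test set.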

Performance estimation FAQs

What is the estimation based on?

The estimation is based on snippets you’ve already tagged. We run the smart tracker on those snippets and base our results accordingly.

When calculating hit rate (recall), how do you know how many snippets the tracker missed?

This estimation is based on test sets of snippets that were tagged when the tracker was set up. We already know which snippets match the concept and which ones don’t, so when we run the tracker on them, we cross reference the results with what’s already been tagged.

Will the precision estimate improve with each additional round of training?

Not necessarily. Each tagging round adds new data to the training set, and the model may not yet have learned from the new examples, especially if they introduce patterns it hasn’t seen before. Since the performance metrics at every round are computed using all tagged data, the performance may effectively decrease. Also note that the test set is different after every round, so the numbers aren’t directly comparable to each other.

The performance estimation went down between one round and the next. How come?

We measure performance using a method called cross-validation. This means that we train the model on parts of your data, and test to see that it correctly identifies the other parts of it based on what you’ve tagged. We do this to align with a real-life situation, in which the model scans conversations it hasn’t seen yet. After the last round, new training data was added, which improves the model performance, but also presents the model with the challenge of identifying new data. It’s normal, therefore, that the numbers go down. As you tag more examples, the model stabilizes and gives the best possible performance.

Smart trackers for playbooks

Note: Playbooks are being rolled out in July 2024. For more about playbooks, see Create and manage playbooks.

We’ve built out-of-the-box smart trackers to support out-of-the-box playbooks. If you decide to build your own smart trackers for your playbook, you’ll need to take a slightly different approach than you take for other smart trackers. Smart trackers that track initiative adoption or rep performance, for example, tend to focus on the company side of the call. When building smart trackers for playbooks, you probably want to focus on the customer side of the call. Follow these guidelines to build strong, accurate smart trackers for your playbooks.

  1. Train the tracker on the customer side of the call.
    In most cases, you’ll want to train the tracker on what the customer is saying in a call, because that’s how the elements of the playbook are filled in.

  2. Give sample sentences that reflect what the customer says (not the rep).

    For example, if you’re building a tracker to identify pain, you’ll want the sample sentences to express pain, not ask about it. For example:

    • We are experiencing slower than expected growth and changing climate in the space with huge players that are eating our lunch.

    • We have a challenge to enable and train our sales people on a very technical product.

    • The biggest challenge is there's three people involved so it's changing hands, and a lot gets lost in that hand change.
