Create and manage smart trackers

Who can do this: Business admin

Plan: Gong Foundation*

Where to go: Company settings > Trackers

Smart trackers are AI models that enable you to find when concepts are mentioned in your conversations, even when they are said in diverse ways. Smart trackers take into account how people naturally speak. For example, if you track the concept “Asking for a discount,” smart trackers can find when customers say things like “Can we get a better price?”, “What’s your best deal?”, or “Can you go any lower?”

Smart trackers help your team:

  • Uncover hidden patterns: Recognize concepts regardless of wording, identifying customer needs even when expressed in unexpected ways.

  • Identify concepts: Track complete ideas rather than exact terms, providing more meaningful conversation analysis.

  • Use the results to improve business outcomes: Trackers can help you understand what customers are talking about, how reps are promoting new initiatives, how specific issues are perceived by customers, and more.  

What you need to get started

To train a custom smart tracker, you need at least 500 recorded English calls; best results are achieved with at least 1500 calls. If you have fewer than 1500 calls, choose concepts that are fairly common, since this enables us to build a model with many examples of sentences that positively match your concept. If you have fewer than 500 English calls, or if the concept you want to track isn’t that common, you may want to build a keyword tracker, which finds words and phrases rather than concepts. For more, see Create keyword trackers.

There’s no minimum number of calls needed for using pre-trained smart trackers, which can be activated right away from your Trackers page.

Create a custom smart tracker

Where to go: Company settings > Customize analysis > Trackers.

  1. In the top right corner, click + Create tracker, and select Smart tracker.

  2. Give the tracker a name (you can change this later) and click Create.

    [Image: Popup window for creating a new tracker with a name input field]

  3. Settings: Add a description to help people understand what concept is being tracked.
    In the Mentioned by field, choose which side of the conversation you want to track: your company, the customer, or any party.

    [Image: Settings page for a budget constraints tracker, highlighting customer mention options]

  4. Tracker filters: Set filters to focus the tracker on specific types of calls so that it produces accurate results. Once activated, the tracker is applied only to calls that match these filters. Choose whether to track the concept in calls only, or in calls and emails. If you choose calls and emails, note that trackers are trained on calls only, and that tracker results in emails appear only in the deal activity panel and on the account page.

  5. By default, the tracker is set to track external calls only. To change this and to add other filters, click Change. This opens the filter editor, where you can choose the filters that you want applied to this tracker. For example, if you only want to track calls that are in a particular deal stage, or calls handled by a specific team, set these as filters. Click Save.

  6. To track a specific part of the call (for example, the first 5 minutes or the last 5 minutes), click Choose which part of the call. By default, trackers search the entire call.

  7. Build model > Add sentences: Give at least 5 examples of real sentences from real calls that fit this concept. We'll use these examples to pull sentences from your calls that match (or don't match) the concept. Click Bulk add sentences to add up to 100 sentences. When you’ve added as many sentences as you like, click Continue.

    [Image: Interface for adding example sentences in a model training application]

  8. Build model > Train the model: Tag sentences to train the model to identify which sentences match your concept and which ones don’t. Basic training requires 4 rounds of tagging. Each round contains 25 sentences. After 4 rounds of tagging, review the tracker to see if it’s producing accurate results.

  9. Click Performance overview to see what types of results the tracker would get if it were activated. Learn more about assessing smart tracker performance.

  10. Results look good? Activate the tracker. Set up a stream when you activate it to automatically collect relevant calls and be notified about them. Want more accurate results? Continue training the tracker by completing more rounds of tagging.

Train after activation

After a smart tracker is activated, you can continue training to improve results.

Where to go: Company settings > Customize analysis > Trackers

  1. Locate the tracker you want to continue training and click Train more.

    [Image: User interface showing options to train, deactivate, or delete a tracker]

  2. Tag more rounds of sentences.

  3. When you’re satisfied with the results, click Publish changes to make them live.  

Edit after activation

After a smart tracker is activated, edit its filters to apply it to different calls.

Where to go: Company settings > Customize analysis > Trackers

  1. Locate the tracker you want to edit and click Edit.

  2. Edit the relevant fields and click Publish changes to make them live.

Note

Once you’ve made changes to an active tracker, you’ll see an unpublished changes message at the top of the screen. Click Publish changes to make the changes live.

View tracker results

Once your smart tracker is set up, view results in the following places:

  • Search page

  • Team stats

  • Streams

  • Initiative boards

  • Calls API (see the example after this list)

  • Calls CSV

  • Playbook

  • Deal activity panel

  • And more
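If you work with the Calls API, you can also pull tracker mentions programmatically. The sketch below is a minimal Python example, not an official recipe: the endpoint and field names (POST /v2/calls/extensive, contentSelector, content.trackers) are based on Gong's public API reference, so confirm them against the current docs, and the credentials shown are placeholders.

```python
# Hedged sketch: pull tracker mentions via Gong's Calls API.
# Endpoint and field names are assumptions based on Gong's public API
# reference (POST /v2/calls/extensive); verify against the current docs.
import requests

ACCESS_KEY = "YOUR_ACCESS_KEY"        # placeholder credential
ACCESS_SECRET = "YOUR_ACCESS_SECRET"  # placeholder credential

response = requests.post(
    "https://api.gong.io/v2/calls/extensive",
    auth=(ACCESS_KEY, ACCESS_SECRET),  # Gong uses Basic auth with API keys
    json={
        "filter": {
            "fromDateTime": "2025-01-01T00:00:00Z",
            "toDateTime": "2025-03-01T00:00:00Z",
        },
        # Ask for tracker occurrences in each call's content.
        "contentSelector": {"exposedFields": {"content": {"trackers": True}}},
    },
)
response.raise_for_status()

for call in response.json().get("calls", []):
    for tracker in call.get("content", {}).get("trackers", []):
        # Each entry typically includes the tracker name and a mention count.
        print(call["metaData"]["id"], tracker.get("name"), tracker.get("count"))
```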

Deep dive

Set up the tracker

Name: This name is displayed everywhere the tracker appears, so choose a name that is short and meaningful. For example, “Pricing objections” is better than “Pricing” or “Customers who say that the price is too high”.

Description: Describe the concept behind the tracker so that anyone who sees the tracker understands what it's tracking. This description appears when someone hovers the tracker name.

Mentioned by: Decide whether you want the tracker to be applied to what people at your company say, what the customer says, or anyone.

Want to track how your reps are promoting new initiatives? Select Your company.

Want to track what customers are saying about a new product or specific pain points? Select Customers.

Give example sentences

Example sentences are what the AI model uses to ‘understand’ what you’re looking for. Provide at least 5 examples of real sentences from real calls that you would want the tracker to find.

[Image: Interface for adding example sentences in a model training application]

Tips for writing great example sentences

  • Keep the sentences short and precise.

    • “Tell me a bit more about the main challenges you’re facing right now.”

  • Choose sentences that use different wording.

    • "This is John on a recorded line."

    • "My name is Claire and I’m calling from a monitored line."

  • Use a variety of sentence types.

    • "That’s just not in our price range."

    • "Could you go any lower on that cost?"

  • Look for sentences that are specific, not general.

    • "Our main priority is driving higher conversion rates through the sales funnel."

  • Make sure each example is a single sentence only.

  • When copying from transcripts, copy full sentences and edit as necessary.

Tag sentences to train the model

Using the example sentences you provide, we create sets of sentences from your existing calls. Some of these sentences are similar to your examples; some aren’t. Tag the sentences so the model learns which ones fit your concept and which ones don’t. The sentences you’ll be tagging are in bold; the unbolded sentences before and after them give you context.

  • Tag sentences YES if they fit your concept.

  • Tag sentences NO if they don't fit your concept.

  • Tag sentences NOT SURE if you’re not sure.

[Image: Discussion about a deal board and customer interactions in a sales context]

To hear a snippet of the call, click Go to call in the bottom right.

Each round of tagging includes 25 sentences and takes about 10 minutes: this includes the time it takes you to tag the sentences and the processing time for us to train the model.

Important:

During the first 4 rounds of training, you're building the model, so expect to tag many of the sentences NO. This is expected and part of the training process, because it teaches the model what types of sentences to avoid.

Review the model and evaluate it

After you've completed 4 rounds of tagging, go to Performance overview to review and evaluate the results.

[Image: Performance estimation showing precision and hit rate metrics for a tracker model]

The results you see on this page are an estimation of the tracker performance, and the types of sentences the tracker will surface if you activate it right now.

At the top, you’ll see a performance estimation. Below this are 20 sentences that the model has surfaced in your calls. Unlike previous screens, these aren’t sentences that need to be tagged, and the model isn’t deliberately giving you samples of sentences that don’t match your concept. These are samples of the types of results you’ll get when you activate the tracker.

If a majority of the results fit your concept, activate the tracker. If you’re not satisfied with the results, keep tagging sentences to improve the accuracy.

What should you do if you want better tracker results?

You can continue training the tracker with more rounds of tagging, and by checking the results after every round. During rounds 5 to 10, the results should improve with each round.

What should you do if the tracker results are way off?

If the results aren’t satisfactory, and they don’t get better through rounds 5 to 10, consider creating a new tracker and using different sentences to represent the concept.

Activate the tracker

When you’re satisfied with the model’s results, activate the tracker. You’ll be asked to choose whether to apply the tracker to upcoming calls only, or also to calls that happened in the past (up to 12 months).

When you activate the tracker, you can automatically create a stream based on tracker mentions.

If you apply the tracker to calls that happened in the past, it can take up to 24 hours for the results to be processed. You’ll get an email notifying you when the results are ready. Until then, you can view partial results on the Search page.

Note:

Take the Academy’s Smart tracker course to learn how to create accurate, insightful smart trackers for your team.

For a winning recipe for tracking team initiative adoption with smart trackers, see: Tracking the performance and adoption of strategic initiatives

Assessing smart tracker performance

Wondering how accurate your smart tracker is? Go to Performance overview to see an estimation of its performance, based on how the smart tracker performs on sample snippets that were tagged when the tracker was set up.

It’s not an exact measure of how the tracker will perform in real life, but it does provide a fair and straightforward approximation.


Precision & hit rate (recall)

What is precision?

Precision refers to the percentage of smart tracker detections that are correct. For example, when precision is 80% and the smart tracker detects 10 snippets, that means 8 of these snippets are correct detections and 2 of them are false.

Another way of describing precision is by assessing the number of true positives and false positives. True positives are detections that are correct. False positives are detections that are incorrect. When precision is 80%, it means that there were 8 true positives and 2 false positives.
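In standard notation (the textbook definition of precision, not anything Gong-specific), the 80% example works out as:

```latex
\mathrm{precision} = \frac{TP}{TP + FP} = \frac{8}{8 + 2} = 80\%
```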


When does high precision matter?

High precision matters when:

  • You want to reduce false leads

  • You want to make optimal use of your resources

What is hit rate (recall)?

Hit rate, also known as recall, refers to the percentage of correct snippets that the smart tracker detects out of all the correct snippets that exist. For example, when the hit rate is 70%, it means that the smart tracker detected 7 correct snippets for every 10 correct snippets that actually exist. It missed 3 correct snippets.

Another way of describing hit rate (recall) is by assessing the number of true positives and false negatives. True positives are detections that are correct. False negatives are detections that were missed. When the hit rate (recall) is 70%, it means that there were 7 true positives and 3 false negatives.
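In the same standard notation, the 70% example works out as:

```latex
\mathrm{hit\ rate\ (recall)} = \frac{TP}{TP + FN} = \frac{7}{7 + 3} = 70\%
```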


When does a high hit rate (recall) matter?

A high hit rate (recall) matters more when:

  • You want to detect as many relevant snippets as possible

  • You care less about seeing snippets that aren’t relevant than you care about missing snippets that are relevant

How we calculate these metrics

We calculate these metrics by running the model on snippets that have already been tagged. When the model was trained, at least 100 snippets (4 rounds) were tagged Yes, No, or Not sure. We set aside 10% of these snippets for validation (known as a held-out set) and use the other 90% to train the model.

After training the model, we test it on the held-out set to see how many of the Yes snippets it detects correctly, how many Yes snippets it doesn’t detect, and how many No snippets it detects incorrectly.

We do this once, reshuffle the snippets, set aside a different held-out set, and test the model again. We repeat this process 10 times to get the performance estimation.
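As a rough illustration of this repeated hold-out procedure, here's a minimal Python sketch. Gong's actual model and features are proprietary; a simple TF-IDF + logistic regression classifier stands in here purely to show the evaluation loop (10 repeats, 10% held out each time):

```python
# Minimal sketch of the repeated hold-out evaluation described above.
# Gong's real model is proprietary; TF-IDF + logistic regression is a
# stand-in used only to illustrate the procedure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import ShuffleSplit
from sklearn.pipeline import make_pipeline

def estimate_performance(snippets, labels, n_repeats=10, holdout=0.10):
    """Average precision and hit rate (recall) over repeated 90/10 splits."""
    splitter = ShuffleSplit(n_splits=n_repeats, test_size=holdout, random_state=0)
    precisions, recalls = [], []
    for train_idx, test_idx in splitter.split(snippets):
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit([snippets[i] for i in train_idx], [labels[i] for i in train_idx])
        predicted = model.predict([snippets[i] for i in test_idx])
        actual = [labels[i] for i in test_idx]
        precisions.append(precision_score(actual, predicted, zero_division=0))
        recalls.append(recall_score(actual, predicted, zero_division=0))
    return sum(precisions) / n_repeats, sum(recalls) / n_repeats

# Example usage: labels are 1 for snippets tagged Yes, 0 for those tagged No.
# precision, hit_rate = estimate_performance(tagged_snippets, tagged_labels)
```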

Performance estimation FAQs

What is the estimation based on?

The estimation is based on snippets you’ve already tagged. We run the smart tracker on those snippets and compare its detections with your tags.

When calculating hit rate (recall), how do you know how many snippets the tracker missed?

This estimation is based on test sets of snippets that were tagged when the tracker was set up. We already know which snippets match the concept and which ones don’t, so when we run the tracker on them, we cross reference the results with what’s already been tagged.

Will the precision estimate improve with each additional round of training?

Not necessarily. Each tagging round adds new data to the training set, and the model may not yet have learned from the new examples, especially if they’re unlike anything it has seen before. Because all tagged data is used to compute the performance metrics after every round, the measured performance can temporarily decrease. Also note that the test set is different after every round, so the numbers from different rounds can’t be compared directly.

The performance estimation went down between one round and the next. How come?

We measure performance using a method called cross-validation. This means we train the model on part of your data and test whether it correctly identifies the rest, based on what you’ve tagged. We do this to mirror a real-life situation, in which the model scans conversations it hasn’t seen yet. After the last round, new training data was added; this improves the model, but also presents it with the challenge of identifying new data. It’s normal, therefore, for the numbers to go down. As you tag more examples, the model stabilizes and gives the best possible performance.

Smart trackers for playbooks

We’ve built pre-trained smart trackers to support our out-of-the-box playbooks. If you decide to build your own smart trackers for your playbook, you’ll need to take a slightly different approach than you take for other smart trackers. Smart trackers that track initiative adoption or rep performance, for example, tend to focus on the company side of the call. When building smart trackers for playbooks, you probably want to focus on the customer side of the call. Follow these guidelines to build strong, accurate smart trackers for your playbooks.

  1. Train the tracker on the customer side of the call.
    In most cases, you’ll want to train the tracker on what the customer is saying, because that’s how elements of the playbook are filled in.

  2. Give sample sentences that reflect what the customer says (not the rep).

    For example, if you’re building a tracker to identify pain, you’ll want the sample sentences to express pain, not ask about it. For example:

    • We are experiencing slower than expected growth and changing climate in the space with huge players that are eating our lunch.

    • We have a challenge to enable and train our sales people on a very technical product.

    • The biggest challenge is there's three people involved so it's changing hands, and a lot gets lost in that hand change.

*The features available to you depend on your company’s plan and your assigned seats.

