Business admin
Gong Foundation*
Note:
This article applies only to the example-based smart tracker builder. The newer question-based smart tracker builder, rolling out in September 2025, is the default option—it’s easier to use, works across multiple languages, and doesn’t require a minimum call volume.
Smart trackers are AI models that enable you to find when concepts are mentioned in your conversations, even when they are said in diverse ways. Smart trackers take into account how people naturally speak. For example, if you track the concept “Asking for a discount”, smart trackers can find when customers say things like “Can we get a better price?”, “What’s your best deal?”, or “Can you go any lower?”
Smart trackers help your team:
Uncover hidden patterns: Recognize concepts regardless of wording, identifying customer needs even when expressed in unexpected ways.
Identify concepts: Track complete ideas rather than exact terms, providing more meaningful conversation analysis.
Use the results to improve business outcomes: Trackers can help you understand what customers are talking about, how reps are promoting new initiatives, how specific issues are perceived by customers, and more.
What you need to get started
To train a custom example-based smart tracker, you need at least 500 recorded calls in English. For best results, aim for at least 1500 calls. If you have fewer than 1500 calls, choose concepts that are fairly common, since this enables us to build a model with many examples of sentences that positively match your concept. If you have fewer than 500 English calls, or if the concept you want to track isn’t that common, you may want to build a keyword tracker, which finds words and phrases rather than concepts. Learn more about creating keyword trackers.
There’s no minimum number of calls needed for using pre-trained smart trackers.
Create a custom example-based smart tracker
Admin Center > Agent Studio
Hover over AI Tracker and click Settings. On the Smart trackers page, click + Create tracker, and select Smart tracker.
Click Example-based builder at the bottom of the page.
Give the tracker a name (you can change this later) and click Create.
Settings: Add a description to help people understand what concept is being tracked.
In the Mentioned by field, choose which side of the conversation you want to track: your company, the customer, or any party.
Tracker filters: Set filters to focus the tracker on specific types of calls so that it produces accurate results. Once activated, the tracker is applied to calls that match these filters too. Choose whether to track the concept in calls only, or in calls and emails. If you choose calls and emails, note that trackers are trained on calls only, and that trackers in emails are only available in the deal activity panel and account page.
By default, the tracker is set to track external calls only. To change this and to add other filters, click Change. This opens the filter editor, where you can choose the filters that you want applied to this tracker. For example, if you only want to track calls that are in a particular deal stage, or calls handled by a specific team, set these as filters. Click Save.
To choose a specific part of the call you want to track (for example, the first 5 minutes or the last 5 minutes), click Choose which part of the call. By default, trackers search anywhere in the call.
Build model > Add sentences: Give at least 5 examples of real sentences from real calls that fit this concept. We'll use these sentences to pull sentences from your calls that match (or don't match) the concept. Click Bulk add sentences to add up to 100 sentences. When you’ve added as many sentences as you like, click Continue.
Build model > Train the model: Tag sentences to train the model to identify which sentences match your concept and which ones don’t. Basic training requires 4 rounds of tagging. Each round contains 25 sentences. After 4 rounds of tagging, review the tracker to see if it’s producing accurate results.
Click Performance overview to see what types of results the tracker would get if it were activated. Learn more about assessing smart tracker performance.
Results look good? Activate the tracker. Set up a stream when you activate it to automatically collect relevant calls and be notified about them. Want more accurate results? Continue training the tracker by completing more rounds of tagging.
Train after activation
After an example-based smart tracker is activated, you can continue training to improve results.
Admin Center > Agent Studio
Hover over AI Tracker and click Settings. Locate the tracker you want to continue training and click Train more.
Tag more rounds of sentences.
When you’re satisfied with the results, click Publish changes to make them live.
Edit after activation
After an example-based smart tracker is activated, edit its filters to apply it to different calls.
Admin Center > Agent Studio
Hover over AI Tracker and click Settings. Locate the tracker you want to edit and click Edit.
Edit the relevant fields and click Publish changes to make them live.
Note:
Once you’ve made changes to an active tracker, you’ll see an unpublished changes message at the top of the screen. Click Publish changes to make the changes live.
View tracker results
Once your smart tracker is set up, view results in the following places:
Search page
Team stats
Streams
Initiative boards
Calls API
Calls CSV
Playbook
Deal activity panel
And more
Deep dive
Set up the example-based tracker
Name: This name is displayed everywhere the tracker appears, so choose a name that is short and meaningful. For example, “Pricing objections” is better than “Pricing” or “Customers who say that the price is too high.”
Description: Describe the concept behind the tracker so that anyone who sees the tracker understands what it's tracking. This description appears when someone hovers over the tracker name.
Mentioned by: Decide whether you want the tracker to be applied to what people at your company say, what the customer says, or anyone.
Want to track how your reps are promoting new initiatives? Select Your company.
Want to track what customers are saying about a new product or specific pain points? Select Customers.
Give example sentences
Example sentences are what the AI model uses to ‘understand’ what you’re looking for. Provide at least 5 examples of real sentences from real calls that you would want the tracker to find.
Tips for writing great example sentences
Keep the sentences short and precise.
“Tell me a bit more about the main challenges you’re facing right now.”
Choose sentences that use different words from one another.
"This is John on a recorded line."
"My name is Claire and I’m calling from a monitored line."
Use a variety of sentence types.
"That’s just not in our price range."
"Could you go any lower on that cost?"
Look for sentences that are specific, not general.
"Our main priority is driving higher conversion rates through the sales funnel."
Make sure each example is a single sentence only.
When copying from transcripts, copy full sentences and edit as necessary.
Tag sentences to train the model
Using the example sentences you provide, we create sets of sentences from your existing calls. Some of these sentences are similar to your sentences; some aren’t. Tag the sentences so the model learns which ones fit your concept and which ones don’t. The sentences you’ll be tagging are in bold. The sentences that are not in bold, before and after the bolded sentences, give you context.
Tag sentences YES if they fit your concept.
Tag sentences NO if they don't fit your concept.
Tag sentences NOT SURE if you’re not sure.
To hear a snippet of the call, click Go to call in the bottom right.
Each round of tagging includes 25 sentences and takes about 10 minutes, including the time it takes you to tag the sentences and the processing time for us to train the model.
Important:
During the first 4 rounds of training, you're building the model, so expect to tag many of the sentences NO. This is normal and part of the training process, because it teaches the model what types of sentences to avoid.
Review the model and evaluate it
After you've completed 4 rounds of tagging, go to Performance overview to review and evaluate the results.
The results you see on this page are an estimation of the tracker's performance and the types of sentences the tracker will surface if you activate it right now.
At the top, you’ll see a performance estimation. Below this are 20 sentences that the model has surfaced in your calls. Unlike previous screens, these aren’t sentences that need to be tagged, and the model isn’t deliberately giving you samples of sentences that don’t match your concept. These are samples of the types of results you’ll get when you activate the tracker.
If a majority of the results fit your concept, activate the tracker. If you’re not satisfied with the results, keep tagging sentences to improve the accuracy.
What should you do if you want better tracker results?
You can continue training the tracker with more rounds of tagging, and by checking the results after every round. During rounds 5 to 10, the results should improve with each round.
What should you do if the tracker results are way off?
If the results aren’t satisfactory, and they don’t get better through rounds 5 to 10, consider creating a new tracker and using different sentences to represent the concept.
Activate the tracker
When you’re satisfied with the model’s results, activate the tracker. You’ll be asked to choose whether to apply the tracker to upcoming calls only, or also to calls that happened in the past (up to 12 months).
When you activate the tracker, you can automatically create a stream based on tracker mentions.
If you apply the tracker to calls that happened in the past, it can take up to 24 hours for the results to be processed. You’ll get an email notifying you when the results are ready. Until then, you can view partial results on the Search page.
Note:
For a winning recipe for tracking team initiative adoption with smart trackers, see: Tracking the performance and adoption of strategic initiatives
Assessing example-based smart tracker performance
Wondering how accurate your smart tracker is? Go to Performance overview to see an estimation of its performance, based on how the smart tracker performs on sample snippets that were tagged when the tracker was set up.
It’s not an exact measure of how the tracker will perform in real life, but it does provide a fair and straightforward approximation.
Precision & hit rate (recall)
What is precision?
Precision refers to the percentage of smart tracker detections that are correct. For example, when precision is 80% and the smart tracker detects 10 snippets, that means 8 of these snippets are correct detections and 2 of them are false.
Another way of describing precision is by assessing the number of true positives and false positives. True positives are detections that are correct. False positives are detections that are incorrect. When precision is 80%, it means that there were 8 true positives and 2 false positives.
When does high precision matter?
High precision matters when:
You want to reduce false leads
You want to make optimal use of your resources
What is hit rate (recall)?
Hit rate, also known as recall, refers to the percentage of correct snippets that the smart tracker detects out of all the correct snippets that exist. For example, when the hit rate is 70%, it means that the smart tracker detected 7 correct snippets for every 10 correct snippets that actually exist. It missed 3 correct snippets.
Another way of describing hit rate (recall) is by assessing the number of true positives and false negatives. True positives are detections that are correct. False negatives are detections that were missed. When the hit rate (recall) is 70%, it means that there were 7 true positives and 3 false negatives.
When does a high hit rate (recall) matter?
A high hit rate (recall) matters more when:
You want to detect as many relevant snippets as possible
You’d rather see some snippets that aren’t relevant than miss snippets that are relevant
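Both metrics come down to simple arithmetic over detection counts. As a minimal sketch, here's how you could compute them in Python using the example numbers above (the function names are illustrative, not part of Gong):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Share of the tracker's detections that are correct."""
    return true_positives / (true_positives + false_positives)

def hit_rate(true_positives: int, false_negatives: int) -> float:
    """Share of all correct snippets that the tracker actually detected (recall)."""
    return true_positives / (true_positives + false_negatives)

# From the examples above: 8 correct detections, 2 incorrect ones -> 80% precision
print(precision(8, 2))  # 0.8
# 7 correct snippets found, 3 missed -> 70% hit rate (recall)
print(hit_rate(7, 3))   # 0.7
```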
Estimating missed snippets
This estimation is based on test sets of snippets that were tagged when the smart tracker was set up. We already know which snippets match the concept and which ones don’t, so when we run the smart tracker on them, we cross-reference the results with what’s already been tagged.
How we calculate these metrics
We calculate these metrics by running the model on snippets that have already been tagged. When the model was trained, at least 100 snippets (4 rounds) were tagged Yes, No, or Not sure. We set aside 10% of these snippets for validation (known as a held-out set) and use the other 90% to train the model.
After training the model, we test it on the held-out set to see how many of the Yes snippets it detects correctly, how many Yes snippets it doesn’t detect, and how many No snippets it detects incorrectly.
We do this once, remix the snippets, set aside a different held-out set, and test the model again. We repeat this process 10 times to get the performance estimation.
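If you're curious what this repeated held-out evaluation looks like in practice, here's a simplified Python sketch of the general technique. It's an illustration only, not Gong's actual code; `train_fn` is a hypothetical stand-in for whatever routine trains the model, and labels are assumed to be simple Yes/No booleans:

```python
import random

def estimate_performance(snippets, train_fn, repeats=10, held_out_frac=0.10):
    """Repeatedly hold out 10% of tagged snippets, train on the rest, and
    average precision and hit rate (recall) over all repeats.

    snippets: list of (text, label) pairs, label True for Yes, False for No.
    train_fn: trains a model on such pairs and returns predict(text) -> bool.
    """
    precisions, hit_rates = [], []
    for _ in range(repeats):
        shuffled = snippets[:]                  # remix the snippets each repeat
        random.shuffle(shuffled)
        cut = max(1, int(len(shuffled) * held_out_frac))
        held_out, training = shuffled[:cut], shuffled[cut:]
        predict = train_fn(training)            # train on the other 90%
        tp = sum(1 for text, label in held_out if label and predict(text))
        fp = sum(1 for text, label in held_out if not label and predict(text))
        fn = sum(1 for text, label in held_out if label and not predict(text))
        if tp + fp:
            precisions.append(tp / (tp + fp))   # correct detections / all detections
        if tp + fn:
            hit_rates.append(tp / (tp + fn))    # detections / all Yes snippets
    return sum(precisions) / len(precisions), sum(hit_rates) / len(hit_rates)
```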
Performance estimation FAQs
What is the estimation based on?
The estimation is based on snippets you’ve already tagged. We run the smart tracker on those snippets and compare its detections with your tags.
When calculating hit rate (recall), how do you know how many snippets the tracker missed?
This estimation is based on test sets of snippets that were tagged when the tracker was set up. We already know which snippets match the concept and which ones don’t, so when we run the tracker on them, we cross-reference the results with what’s already been tagged.
Will the precision estimate improve with each additional round of training?
Not necessarily. Each tagging round adds new data to the training set, and the model may not have fully learned the new examples yet, especially if they are unlike anything it has seen before. Because all tagged data is used to compute the performance metrics at every round, the measured performance can temporarily decrease. Also note that the test set is different after every round, so the numbers aren't directly comparable to each other.
The performance estimation went down between one round and the next. How come?
We measure performance using a method called cross-validation: we train the model on part of your data, then test whether it correctly identifies the rest based on what you’ve tagged. We do this to mirror the real-life situation, in which the model scans conversations it hasn’t seen yet. After the last round, new training data was added, which improves the model but also challenges it with data it hasn’t learned yet. It’s normal, therefore, for the numbers to go down. As you tag more examples, the model stabilizes and gives the best possible performance.
Smart trackers for playbooks
We’ve built pre-trained smart trackers to support our out-of-the-box playbooks. If you decide to build your own smart trackers for your playbook, you’ll need to take a slightly different approach than you take for other smart trackers. Smart trackers that track initiative adoption or rep performance, for example, tend to focus on the company side of the call. When building smart trackers for playbooks, you probably want to focus on the customer side of the call. Follow these guidelines to build strong, accurate smart trackers for your playbooks.
Train the tracker on the customer side of the call.
In most cases, you’ll want to train the tracker on what the customer is saying, because that’s how elements of the playbook are filled in.
Give sample sentences that reflect what the customer says (not the rep).
For example, if you’re building a tracker to identify pain, you’ll want the sample sentences to express pain, not ask about it:
We are experiencing slower than expected growth and changing climate in the space with huge players that are eating our lunch.
We have a challenge to enable and train our sales people on a very technical product.
The biggest challenge is there's three people involved so it's changing hands, and a lot gets lost in that hand change.
*The features available to you depend on your company’s plan and your assigned seats.
Example-based smart tracker FAQs
Who can build smart trackers?
If you're an admin, you can build smart trackers. If you're not an admin and you want a smart tracker to fit your business needs, ask a Gong admin to set one up for you.
What’s the difference between keyword trackers and smart trackers?
Keyword trackers are based on specific words, terms, or phrases that are mentioned in your conversations. Smart trackers are AI models trained to identify and find specific concepts, even when they are said using different words and in unexpected ways.
How do I choose whether to build a keyword tracker or an example-based smart tracker?
Each type of tracker has its own advantages, and which one you choose depends on a few considerations.
How many calls do you have? To build a strong example-based smart tracker, you’ll need at least 500 recorded English calls; ideally you should have at least 1500. The AI model behind the example-based smart tracker uses your calls for training, so the more calls you have, the more material you have for training the model.
What’s a good smart tracker concept?
A good concept is one that’s specific, rather than broad. For example, "asking for a discount” is a good concept because it is specific. “Pricing” is not a good concept because it is too broad.
"Asking for a discount" is often expressed in completely different ways, using different words. For example, one customer may say “Is that the best you can do?” while another customer may say “That price is too high. Can you go lower?”
The words used are completely different, and none of them include the word “discount” so these instances would be hard to find with a keyword tracker. A strong smart tracker will be able to surface them.
How many example-based smart trackers can I have?
You can have up to 100 active example-based smart trackers per workspace.
I just activated a smart tracker, but the only place I see it is on the Search page. Why?
If you’ve activated a new smart tracker and applied it to calls that happened in the past, it can take some time to process those calls. The smart tracker will be available on the Search page right away, but it may not appear in other areas yet because we show aggregated data there, and don’t want to show incomplete data as it could be misleading.
How long does it take to set up an example-based smart tracker?
It takes about 40 minutes to set up and train an example-based smart tracker, though this depends on how many rounds of training you do, and how complex and common your concept is. Building a tracker for a simple, common concept such as “asking for consent” will take less time than building one for a more complex concept such as “discovery questions”.
Which languages are supported for example-based smart trackers?
Example-based smart trackers are supported in English only.
Why do I need a minimum number of calls to build an example-based smart tracker?
To build an example-based smart tracker, the concept you want to track must appear in at least 50 calls. Training also requires a minimum of 500 recorded English calls, with the expectation that the concept appears in at least 50 of them. More calls improve accuracy, since the model has more data to learn from. The exact number of calls needed depends on call length and how common the topic is. For new messaging, it may take time before enough examples appear in calls to train the tracker effectively.
How should I use filters to build an example-based smart tracker?
Filters enable you to train the smart tracker on relevant calls only, making the results more accurate. For example, if you want to track a concept said by your reps, you can filter for the parts of calls where your reps are speaking.
Note:
The filters you use to train the tracker are automatically applied to the activated tracker. So, if you don't plan to apply the tracker to those same types of calls, don't use those filters during training.
Why do I need to use real sentences to train an example-based smart tracker?
The sentences you use to build the smart tracker are the types of sentences the AI model will try to find. If you don’t use sentences that reflect the way people really speak, the model won’t be able to find similar sentences.
Where can I find real sentences for training?
The best example sentences are in your own calls. Go to the Search page and filter by a word or tracker relating to the concept. For example, if your concept is recording consent, filter for consent-related words. Find sentences containing these words, and use them to train your smart tracker.
Can I provide negative sentence examples?
No, provide examples that relate positively to your concept. During training, you’ll teach the AI model to avoid sentences that are not related to your concept by tagging those sentences accordingly.
How do I train the example-based model?
Train the model by tagging sample sentences from your own calls. We’ll show you a snippet from a real call and one sentence will be in bold. If that sentence matches your concept, tag it YES. If it doesn’t, tag it NO. If you’re not sure, tag the sentence NOT SURE.
What does the model do with the tags?
YES: The model learns that this kind of sentence fits the concept
NO: The model learns to avoid this kind of sentence. Marking sentences as "No" is important for training the model, as it teaches the model what to avoid.
NOT SURE: The model skips this kind of sentence and doesn’t count it as evidence for or against the concept.
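Conceptually, the three tags map onto training data something like this (a simplified sketch of the general idea, not Gong's actual pipeline):

```python
def build_training_set(tagged_snippets):
    """tagged_snippets: list of (sentence, tag) pairs,
    with tag in {"YES", "NO", "NOT SURE"}.

    YES becomes a positive example, NO a negative example,
    and NOT SURE is dropped entirely.
    """
    return [(sentence, tag == "YES")
            for sentence, tag in tagged_snippets
            if tag != "NOT SURE"]

# Example: only the first two snippets end up in the training data
examples = [("Can you go any lower on price?", "YES"),
            ("Let's schedule a follow-up call.", "NO"),
            ("We'll circle back on that.", "NOT SURE")]
print(build_training_set(examples))
```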
Is it OK that I see so many NO tags during training?
Yes, it’s fine! During training, we deliberately show you sentences that don’t match your concept, so that we can train the model to avoid these types of sentences. Tagging lots of sentences NO during training is fine and expected.
How many rounds of tagging does it take to train the model?
It takes at least 4 rounds of tagging to train the model. Each round consists of 25 sentences. After this ‘basic’ training, you can continue improving the model until you’re satisfied with the results.
What happens after each round of tagging?
After each round of tagging, our system considers the tags that you’ve provided (YES, NO, NOT SURE) and chooses new sentences to show for the next round.
The first model is built after round 4, taking into account all of the tags until then. From round 5 onwards, a new model is built after each round, and the results you see on the Review page are updated based on the most recent AI model.
If I see sentences with transcript errors, should I still tag them?
Yes. Even if the sentences have transcript errors, tag them.
During the training, is it OK if the smart tracker finds unexpected sentences that relate to my concept?
Yes. That's a sign of a strong tracker: if the sentences are accurate, the tracker is surfacing the concept you're interested in, expressed in unexpected ways.
What’s the difference between the results I see when I train the model and when I review it?
During the training, expect to see many sentences that don’t match your concept. This is deliberate, and part of the training. When you review the model, you’re seeing the types of sentences that the tracker will surface when activated. So, you should expect a majority of sentences to match your concept. If they don’t, keep training the model.
What am I seeing on the Performance overview page?
These are examples of real sentences that the smart tracker surfaced in your calls.
When should I activate the model?
When 15 of the 20 sentences in the Performance overview match your concept (75% precision), that’s considered a good level of accuracy. Determine the level of accuracy that you want according to your own needs.
How can I improve the model’s results?
To improve the results, do more rounds of training. Remember, during the training rounds, you may be tagging lots of NO tags, and that’s fine. The real test is the results you see on the Performance overview.
How can I improve a tracker after it's been activated?
Go to the Tracker page and locate the active tracker you want to improve. In the top right corner, click Train more.
When I review the model, the results are way off. What can I do?
You can train the model through more rounds of tagging to improve results. If they don’t seem to be getting better, then we suggest that you try again, creating a new tracker with different sentences to represent the concept.
Can I copy an example-based smart tracker from one workspace into another one?
No, you can't. Smart trackers are unique to the workspace they are built in, since they are based on calls that were saved there.
I built an example-based smart tracker with a specific team filter. Can I change that team?
Yes, you can change the team by editing tracker filters.
Can I see smart tracker info in Salesforce?
Yes. By default, smart tracker information is exported to Salesforce. However, if your Gong admin has deactivated this, you won’t see smart trackers in Salesforce. Smart tracker details appear in the Content tab of the Conversation and have a [Smart] prefix.