
Content is provided by Salesforce and Mike Gerholdt. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Salesforce and Mike Gerholdt or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal

What Are the Key Features of Salesforce's Model Builder?

29:26
 

Manage episode 430654158 series 2794780

Today on the Salesforce Admins Podcast, it’s another deep dive with Josh Birk as he talks to Bobby Brill, Senior Director of Product for Einstein Discovery.

Join us as we chat about how you can use Model Builder to harness the power of AI with clicks, not code.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Bobby Brill.

What is Model Builder?

Bobby started his career at Salesforce in Customer Success before working on Wave Analytics. These days, he’s the Senior Director of Product for Einstein Discovery, and he’s here to talk about what Model Builder can do for your business.

If you have Data Cloud, then you already have access to Model Builder via the Einstein Studio Tab. With it, you can create predictive models with clicks, not code, using AI to look through your data and generate actionable insights. As Bobby says, the AI isn’t really the interesting part—it’s how you can use it as a tool to solve your business problems.

BYOM - Build Your Own Model

In traditional machine learning, models are trained on data to identify successful and unsuccessful trends, which is fundamental for making accurate predictions. For example, if you want to create an opportunity scoring model, you need to point it to the data you have on which leads converted and which leads didn’t pan out.

Model Builder lets you do just that, building your own model based on the data in your org. What’s more, it fits seamlessly into the structures admins already understand. We can put our opportunity scoring model into a flow to sort high-scoring leads into a priority queue. And we can do all of this with clicks, not code.

Building a predictive model that’s good enough

Einstein’s LLM capabilities offer even more possibilities when it comes to using your data with Model Builder. You can process unstructured text like chats or emails to, for example, measure whether a customer is becoming unhappy. And you can plug that into a flow to do something about it.

One thing that Bobby points out is that building a model is an iterative process. If you have 100% accuracy, you haven’t really created a predictive model so much as a business rule. As long as the impact of a wrong decision is manageable, it’s OK to build something that’s good enough and know that it will improve over time.

There’s a lot more great stuff from Bobby about how to build predictive models and what’s coming next, so be sure to listen to the full episode. And don’t forget to subscribe to hear more from the Salesforce Admins Podcast.

Podcast swag

Learn more

Admin Trailblazers Group

Social

Full show transcript

Josh: Hello, everybody. Your guest host Josh Birk here. Today we are going to talk to Bobby Brill about Model Builder, which is going to allow you to create your own predictive and generative models to use within Salesforce. So without further delay, let's head on over to Bobby. All right, today on the show we welcome Bobby Brill to talk about Model Builder. Do you prefer Robert, Bob, Bobby? What do you like to go by?

Bobby: It's an excellent question. So I'm a junior. My dad is Robert Howard Brill Sr. I have the same first middle and last name. He goes by Robert, Rob, or Bob, so I've always been Bobby my whole life.

Josh: Yeah, I feel you. My brother is Peter. My father was a Carl Peter and my grandfather was a Carl Peter.

Bobby: Wow.

Josh: Got very confusing sometimes. Yeah, yeah. So introduce yourself a little bit to the crowd. What do you do at Salesforce?

Bobby: That's a great question. I've been at Salesforce almost 13 years. I was a customer of Salesforce for about three and a half years prior to joining, so I've been in the ecosystem for quite some time.

Josh: Got it.

Bobby: I started off in the customer success group, actually it was called Customers for Life. So I worked with customers getting onboarded onto Salesforce. I joined the product team back in 2015 in analytics, so we had this thing called Wave Analytics. So even well before AI I've been working with data. The last year I've actually been part of the Data Cloud team, so I do AI for Data Cloud, so it's called Model Builder.

Josh: Got it. Got it. Were you interested in AI before it blew up, before it got all big?

Bobby: Am I interested in AI? I think it's interesting. I think it's really cool technology, but what I really like is how the technology can help our customers solve their business problems. I was a customer, I understood what it was like to just have this tool available and put my data in and what can I do with that data. What I like is showing customers how AI can help them achieve their business goals. I really focus on how the AI helps business goals versus really caring about all the new technology and all the new models that are out. I've got other people that do that. I focus in on how are these models going to be used.

Josh: Chasing solutions and not trends.

Bobby: Correct.

Josh: Like it. Now, before we get into the product, one other question, I just like to ask people this because in technology I find the answers are so varied, was software something you always wanted to get into?

Bobby: Yes. I actually had a computer science degree, so I was writing software. What I realized is, while writing software is fun (I actually really like to debug software still), what I really enjoy is coming up with the ideas of what software should do or how it can help solve problems. Product management has really been the thing for me. When I started Salesforce, I just wanted to get into the company any way I could, so I didn't try for a product manager position-

Josh: Got it.

Bobby: ... but the second I got in, I had to figure out how to get to this position.

Josh: I like it. From a very high level, what is your elevator pitch for Model Builder?

Bobby: Okay, elevator pitch for Model Builder is build predictive models with clicks, not code. It started with actually predictive models. Now that GenAI is available, it's utilize custom, predictive, or generative models with clicks, not code.

Josh: Okay. Now, when we say model, how do you describe that within the input and output of how we interact with an AI?

Bobby: That's a great question, I don't think anyone's really asked me this specifically. But I think the way I would best describe it is a model is just a function. You first want to know what do you want that function to do. You have to understand what that function is capable of doing. AI is only as good as what the model is capable of doing. So in traditional machine learning, you would have a model that perhaps could tell you what is the likelihood of this lead to convert. And how did it understand that? Well, it had to get some examples of what did conversion look like, give me some leads that successfully converted, give me some leads that didn't, so the model can understand what are the trends for a successful outcome or a non-successful outcome. That was traditional machine learning. You'd have to train your model. Now, large language models are really good at putting sentences together. It understood text, it's read so much text, it's trained on that, and it knows when it sees certain words, here's the potential. It can predict the next word and the next sets of words to come out. And so if you think of a model as just a function, and you're going to give it some input and it's going to give you an output, what that function can do is totally dependent and there's so many different use cases. But that's I think how I would best describe a model, is it's a function.
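Bobby's "a model is just a function" framing can be made concrete with a toy sketch in plain Python. This is purely illustrative, not how Salesforce trains models: the field names and data are invented, and real training is far more involved. The point is the shape: training consumes labeled examples and hands back a callable that maps an input to an output.

```python
def train_lead_scorer(examples):
    """Toy training: learn, per industry, the fraction of historical
    leads that converted, and hand back a function (the model)."""
    totals, wins = {}, {}
    for lead, converted in examples:
        industry = lead["industry"]
        totals[industry] = totals.get(industry, 0) + 1
        wins[industry] = wins.get(industry, 0) + int(converted)

    def model(lead):
        industry = lead["industry"]
        if industry not in totals:
            return 0.5  # no observed trend: no better than a coin flip
        return wins[industry] / totals[industry]

    return model

# Historical examples the "function" is trained on: (lead, converted?)
history = [
    ({"industry": "tech"}, True),
    ({"industry": "tech"}, True),
    ({"industry": "tech"}, False),
    ({"industry": "retail"}, False),
]
score = train_lead_scorer(history)
print(score({"industry": "tech"}))  # 2 of 3 tech leads converted
```

Everything downstream of the returned `model` (a flow, a queue, a report) only needs to know its input and its output, which is exactly the framing Bobby uses.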

Josh: Gotcha. Now let's talk a little bit about building models with clicks, not code. I'm trying to think of the right way to ask this. Let's start with what's your basic user scenario of something that they're going to try to build?

Bobby: So thankfully when we're talking about models, it's all around business data. We are a company that sells to businesses. They put their data in our systems, and while a model can do lots of things, we try to focus on what are the things that our customers are likely doing. The easiest one, Salesforce has had Sales Cloud the longest, so you would build an opportunity scoring model. And that is nothing more than a predictive model that understands what are the traits that go towards an opportunity that's going to close, or win I guess, versus an opportunity that's going to lose. That's probably the simplest thing, and this is what machine learning has really done over the past probably 20 years. People have been solving this problem forever. But every single customer wants this, and they want to make sure that it's trained on their data. They use Salesforce because they can fully customize how they want that data to be stored, what object. They're going to have relationships across other objects. It's not going to be everything in an opportunity object. It's going to be across multiple things. And they want to make sure it's their data. So why they don't want to use an out-of-the-box model is they don't know what goes into that. Some people like that, but our large enterprises, they like to understand what goes in that. So by giving our customers control and just saying, "Tell us where this data is," we will then go train that model, and we can predict the likelihood of an opportunity closing or take Service Cloud, predict the likelihood of a case escalating or processes, business processes are really important, predict the time it's going to take for an opportunity to close or go from stage one to stage two or service case from the time it was created till when we think it's going to be predicted resolution. These are all things that I think are bread and butter to Salesforce and things that they can predict. 
And then again, that's your traditional machine learning, that's where you're going to need to use your data to train that model.

Josh: I think it's very interesting because as you say, this isn't a brand new problem, these are questions people have had and have tried to answer. Right now I'm imagining the world's worst formula field that's trying to connect 17 different data points and make a guess about the probability of an opportunity closing.

Bobby: Exactly, yes.

Josh: How would you describe the level of precision that you're seeing from Model Builder these days?

Bobby: The level of precision depends on the data. Some models can be really accurate, but if you have a predictive model that's 100% accurate, then it's not a predictive model, it's some business rule. You've basically told the model, "Look at this field. When you see this value, 100% of the time it's going to be a converted opportunity or... " Sorry, I guess a closed won opportunity. "And when you see this variable, it's always going to be a loss." So there's a lot of times this is data leakage. This is very common in machine learning where you introduce something that basically the model just looks at that and it's like, "I know what I'm going to do." So you never want it to be perfectly accurate. And then there's other levels of accuracy. You could say that, "60% accurate, is that good enough?" Well, it's better than a coin flip, so you are already getting some uplift.
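The data-leakage failure Bobby describes can be sketched with a quick check in plain Python: if any single field's value perfectly determines the outcome, a "100% accurate" model has simply memorized a business rule. The field names here are invented for illustration; this is not a Salesforce feature.

```python
def find_leaky_features(rows, label_key):
    """Flag features whose value perfectly determines the label.
    A model trained on such a feature looks 100% accurate, but it
    has merely memorized a business rule."""
    leaky = []
    features = [k for k in rows[0] if k != label_key]
    for feat in features:
        value_to_label = {}
        consistent = True
        for row in rows:
            value, label = row[feat], row[label_key]
            if value in value_to_label and value_to_label[value] != label:
                consistent = False  # same value, different outcomes: real signal
                break
            value_to_label[value] = label
        if consistent:
            leaky.append(feat)
    return leaky

opps = [
    {"stage_at_snapshot": "closed", "amount": 10, "won": True},
    {"stage_at_snapshot": "closed", "amount": 20, "won": True},
    {"stage_at_snapshot": "open",   "amount": 10, "won": False},
]
print(find_leaky_features(opps, "won"))  # ['stage_at_snapshot']
```

Dropping a flagged field like `stage_at_snapshot` before training forces the model to learn actual trends instead of replaying the rule.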

Josh: Right.

Bobby: So then really it's up to the business to figure out what is the impact of a wrong prediction. And a lot of times the businesses, they know the impact of that wrong prediction. If it helps you prioritize the things faster, great. Then start with something that's, let's say, 60% accurate and then work towards something that is a little bit more accurate. It's an iterative process, so try to not be afraid of doing those iterations because you can get some uplift.

Josh: Yeah. I'm going to ask you the world's most leading question, but it's something that we keep trying to get people to think about, because when it comes to the data that the model is going to leverage, there's size, but there's also quality. How important is the concept of clean data to getting that prediction model?

Bobby: Clean data is very important to getting a good model. However, I don't think there's any company that thinks they have clean data. They all think their data's terrible. I think if you were to look at Salesforce, I mean, we know the data really well here, and I wouldn't say that it's clean. But I think you could argue that you have enough clean data to train these models. So it really depends on the use case.

Josh: Got it.

Bobby: If we're talking about sales data, you probably have a lot. Service data, that's probably the cleanest data out there. Service processes are very much you got to get the data in, you work on SLAs, there's very much these touch points. That data is really good. So if you ever want to try something like, "Where is my data the cleanest?" I guarantee you it's service. Sales is, people don't enter things in right.

Josh: Okay, so I really love that messaging because it's not that cleanliness isn't important, but you don't need perfection to start using these tools.

Bobby: Right. And then I will say that with generative AI and the ability to process a lot of, I'm going to call it, unstructured text, let's say chats or email, and getting information out of that is perfect for actually cleaning up your data or even putting it into a predictive model. Then the next thing is layer these two things together. There's going to be a data cleanup. You're going to be training a model, but then when you're actually delivering the predictions, you don't want to have to worry about cleaning up that data. That's where the LLMs can be used. Something comes in, you get a signal that says, "Hey, this customer, let's say, their sentiment is dropping." Well, how do you know their sentiment is dropping? Because an LLM is figuring it out and saying, "The customer's not happy," and the models are really good at understanding that, which is pretty cool.

Josh: That's a really interesting point. I've actually not tried to really consider that before, because to an LLM, let's take one of the most common data quality problems, if you have, say, redundant fields or you have duplicate data, the LLM isn't actually as worried about that as, say, a standard reporting tool would be.

Bobby: That's correct. Yep.

Josh: So I think we've been touching on it as we've been talking about this, but where do you see the role of an admin being when it comes to constructing and maintaining these models?

Bobby: The best part about Model Builder, which I haven't even talked about, is how it integrates back with Salesforce. What we've tried to do is we give you this tool where you can build this really, really good function. It's got an input, it's got an output. As long as admins understand that there's going to be some inputs needing to go into this and it's going to have an output, you can actually put the models wherever you want. The same way that you're building, let's say, a flow, and within the flow there's a decision tree, admins know how that works really well, or even the admins that can write a little bit of Apex code. So as long as you know how to do that, as long as you know how these different Salesforce functions are coming together, models are going to be just another input to that. So take a flow with a decision. Perhaps it's a case escalation... Or no, let's not even take case escalation, let's talk about leads and lead prioritization. You built a flow, you want to put leads in the right queue. Well, what if before you even put a lead into a queue you can predict the likelihood that this lead is going to be converted. You can say, "Hey, everything that has a score between, let's say, 80 and 100, 100 being the highest score, maybe you want to route that to a special queue." You understand queues, you understand decision trees in a flow. So now all you need to know is, hey, I have this score. How do I get the score? Maybe there's another team that figured that out or maybe you were comfortable enough because you know the data to actually train that model. Now you can just use this as a decision. You don't have to actually show the prediction to anyone. Who cares if that prediction is written anywhere. You don't need that for anything, you just need it at that point. So admins should start thinking about this as just another function. I think flow is a great way to look at it because it's a process. Something goes through step one, you do this, step two, you do that, and so on. And that model might be just part of that process.

Josh: When you're saying that's interacting with flow, are you saying that it's like I have a custom object, I have a custom field, I can make a decision tree based on that? Is it that same level of implementation?

Bobby: It could be. You don't have to write that prediction out anywhere. We can actually generate it live within that flow. So let's say a lead comes in, you kick off a flow, so you have a lead form, the lead comes in, it goes through a flow. You're not sure where that lead is going to go. You technically I guess created the record, and then you want to figure out where does this lead go. Well, you don't need to score it and write that score back to the lead. You can actually within the flow call our model endpoint so we can get an on-demand prediction. We're going to give you that on-demand prediction and we can route it somewhere. What's really cool within a flow, you can also call LLM models. So perhaps the lead comes in, you have some unstructured text, maybe you care about sentiment, maybe you want to understand what's the intent from some texts, an LLM theoretically can go do that. And then you get the output of that LLM and you pass that into the model. Now we know more about this lead or this person and then make a prediction, then file the lead away in a queue. That prediction becomes sort of, I'm going to call it, throwaway. You don't need to use it anywhere.
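The pattern Bobby is describing, get an on-demand score inside a flow, branch on it, and never write it back to the record, can be sketched in plain Python. The threshold bands, queue names, and the stand-in model are all invented for illustration; in a real flow the score would come from calling the model's endpoint.

```python
def route_lead(lead, score_model):
    """Flow-style decision: get an on-demand score, route the lead,
    and throw the score away (it is never written to the record)."""
    score = score_model(lead)  # stand-in for calling a model endpoint
    if 80 <= score <= 100:
        return "priority_queue"
    if score >= 50:
        return "standard_queue"
    return "nurture_queue"

# Invented stand-in for a trained model; a real one is an HTTP endpoint.
def toy_model(lead):
    return 90 if lead.get("budget_confirmed") else 40

print(route_lead({"budget_confirmed": True}, toy_model))   # priority_queue
print(route_lead({"budget_confirmed": False}, toy_model))  # nurture_queue
```

Note that `score` only exists inside the function, which mirrors Bobby's point: the prediction is throwaway, consumed at the decision point and nowhere else.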

Josh: Got it. It's a fun rule [inaudible 00:15:42]. You get the data-

Bobby: Correct.

Josh: ... on demand and then... Yes, gotcha. Now I think we just touched on sort of two different forms of Model Builder. We have the clicks, declarative generate your own model, and then we have bringing in an LLM, and I think this is what we keep referring to as bring your own model. What does bring your own model look like and what kind of models are we supporting there?

Bobby: That's a good question. When I talked about what's the value of Model Builder, my elevator pitch, it was all about building stuff with clicks. And that's because we're really allowing all of our customers to have access to this stuff. But the reality is there's only so much you can do with clicks and then the rest you're going to do with code. We have this idea of bring your own model, whether it's a predictive model or an LLM. You're just connecting these models that live in your infrastructure, customer's infrastructure, whether it's SageMaker, Vertex, Databricks, or maybe it's your Azure OpenAI model, or maybe it's your Google Gemini model. We're giving you the ability to just connect these models directly to Salesforce so you can operationalize them the same way that you would as if it was built on the platform. So you'd have full platform functionality, but the models themselves, they're not hosted in Salesforce. So there's all kinds of things you can do with that. Your data science team can make sure that they have full control. Let's say they fine tune the LLM so it talks specifically in your brand language, for instance. That's a use case.

Josh: Gotcha.

Bobby: We want to give customers the ability to do this on the platform as well. So the same clicks, not code, we want to bring that to LLMs. That's a future thing. We want to give that capability.

Josh: I'm going to make a comparison here, and I'm going to be a little controversial to my artist friends who I've had these arguments with, but I know artists who have actually built their own LLM model based on their own art, and then they're treating these models as their little AI buddy to try different things very quickly and then kind of motivates them in a very specific direction. Is that a quality comparison to what you're seeing people are doing when building their own models?

Bobby: It's a good question. I don't know that that's... Well, I don't know. I think what we are seeing, so brand voice for sure is something that people want an LLM to do. They put sentences together really well, but if you are distributing anything to your customers, you want to make sure that the sentences that are generated are on point to how you would speak as if it was a regular person. So fine-tuning that with specific words and phrases, that's what we're starting to see some customers do with their own LLM. But we're also seeing that there's other techniques, retrieval augmented generation, or RAG. People call it RAG, which I feel like is a... I can't just say RAG without saying retrieval augmented generation to customers because I don't want to be looked at like I'm crazy. But then also-

Josh: It is a sort of unfortunate acronym. You're correct, yeah.

Bobby: It is. But I guess it's getting common, so I'm correcting it... Or not correcting, I'm not saying the full thing as often. But we're finding that that is another approach to not having to train those models. I think research is still out on which is the most effective mechanism because you can say at the time that you want that LLM to process something, say, here's some examples. So you don't have to train because training an LLM is pretty expensive right now.

Josh: Yeah. Both from a quantity and processing point of view, right?

Bobby: That's correct.

Josh: Yeah. Take that one step further for me. How does RAG change the game a little bit?

Bobby: With the ability to quickly find some examples of things that you're looking for... Okay, let's say you are replying back to a customer for customer service, you want to automate it. So customer asks a question, and the LLM obviously can't really answer the question unless you provide it some information. So you could give it some knowledge right away. So first, find some similar cases, find the resolution of these cases, and summarize that and go back to the customer. So simply by searching for certain resolutions and responding back or summarizing those resolutions, you already have brand voice because those resolutions themselves we're assuming that was all typed in by someone who understands how you're supposed to respond back. And then, let's say the LLM responds back, it's already similar, and that gets recorded as the resolution. Now the next time you're responding back, it already sort of knew how to respond, and the next time if you're searching for similar things, you'll probably get the same kind of response back. Did that make sense?
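The retrieval-augmented generation loop Bobby walks through, find similar resolved cases and ground the LLM's prompt in their resolutions, can be sketched in plain Python. The similarity measure here is a crude word overlap purely for illustration; real systems typically use vector embeddings, and the case data is invented.

```python
def retrieve_similar(question, resolved_cases, k=2):
    """Rank past cases by crude word overlap with the new question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        resolved_cases,
        key=lambda c: len(q_words & set(c["question"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question, resolved_cases):
    """Assemble a grounded prompt: instructions plus retrieved resolutions."""
    context = "\n".join(
        f"- Q: {c['question']}\n  Resolution: {c['resolution']}"
        for c in retrieve_similar(question, resolved_cases)
    )
    return (
        "Answer the customer using only these past resolutions:\n"
        f"{context}\n\nCustomer question: {question}"
    )

cases = [
    {"question": "How do I reset my password?",
     "resolution": "Use the Forgot Password link on the login page."},
    {"question": "Where is my invoice?",
     "resolution": "Invoices are under Billing > History."},
]
prompt = build_prompt("I need to reset my password", cases)
print(prompt)
```

Because the retrieved resolutions were written by your own agents, a reply generated from this prompt tends to inherit your brand voice without any fine-tuning, which is the trade-off Bobby raises between RAG and training.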

Josh: It did. It did actually. Yeah. It's like, as a fishing analogy, you're fishing in the sea that you already have. You're bringing in examples that have already been contextualized within your data and you're just like, "Go ahead and just start there." Does that sound accurate?

Bobby: Exactly. Yeah, that's exactly it. And then the other thing is, as you're responding back you could... Because when you're talking to an LLM, you have to generate this prompt. I know this isn't part of the subject here, but Prompt Builder is another great tool that's on top of Model Builder where you basically tell the LLM what you want to do. You program that LLM, and you can insert the retrieval augmented generation wherever you want. It's like you're building an email template and you're just saying, "Hey, here's some similar cases." And then around that within the Prompt Builder you can say, "Here are some examples summarized like this."

Josh: Got it.

Bobby: So you're using this LLM as if it's an assistant that can go do something for you and you give it a bunch of instructions and you put that all in one place. It's pretty cool.

Josh: Yeah, no, it's okay. I still get another nickel if we say Prompt Builder, so it's a good advertisement.

Bobby: Perfect.

Josh: And on that note, so I'm thinking of the Prompt Builder interface and where you build out the prompt and then over on the right we've got like, "Here are the models you can use." So are we going to make that portion transparent to Model Builder and be like, "Oh, hey, my data science team created this specialized model based on our marketing, our brand, our voice. Please use that instead of, say, OpenAPI... or OpenAI 3.0 or something like that."

Bobby: Yep. So that's actually in there today. So if you're using Prompt Builder, when you look at the models, there's a drop-down. The default is going to be the standard model. There's a drop-down there, you can change that to custom models. Once you change that to custom models, it's any other thing that shows up in Model Builder. So this could just be, let's say, OpenAI 3.5 turbo and you've configured it slightly, you've changed the temperature, one of those parameters. We have a model playground that allows you to do that. Or it's an LLM that you brought in. So whether it's the ones that we have GA today or the ones that are coming, it's a model that's your own and you have full control. So then that just shows up in Prompt Builder and you build the prompt. In the future, we're looking at how to, I don't know, give you more controls over which LLM should show up in Prompt Builder versus the ones that you don't want to have show up. So while today it's everything, we know that our customers want that finer granularity, so we're thinking about that.

Josh: Got it. Well, let's touch on those two points. What is availability for Model Builder looking like today?

Bobby: Model Builder, it's actually packaged with Data Cloud. So if you have Data Cloud... I didn't say buy Data Cloud because Data Cloud is now being packaged with many different things. It's a consumption-based pricing model, so this is new to a lot of our customers. But what's cool about doing a consumption-based model for pricing is this tool can just be there. We want Data Cloud to be an extension of the platform, just like you're building custom objects and things like that, we want Data Cloud to be as easy as that. It's just there for everyone. It's a tab called Einstein Studio within Data Cloud. That name may or may not change in the future, so just bear with me if it does. I know we're talking about Model Builder and we have a tab called Einstein Studio, and we like to say Einstein 1 Copilot Studio. I love marketing at Salesforce. It's fun because it changes and I'm like, "I got to just go with it." So Einstein Studio, it's packaged with Data Cloud. So you get Data Cloud, you go into the Data Cloud app, you find Einstein Studio. But it's just a tab. So just like you can find the Reports tab and any app that you want, you can put Einstein Studio in whatever app you want. So if you're an admin, it's just a tab, you'll find it. It's only there if you have Data Cloud turned on in your org, but that is currently how it's packaged. If that's the future, whether it changes, who knows?

Josh: Who knows? I do feel like if there's one thing our audience has learned if they've been in the Salesforce ecosystem for even half a second, it's that all things might change. They might change their name, they might change their location, they might change their pricing. So if you're listening to this and you're interested, please check out your help documents or talk to your account executive. Speaking of things that might change, anything on the roadmap you want to give a shout-out to?

Bobby: Model Builder itself, I mean, there's lots of things we're doing with Model Builder just in this release. Actually here, this is really important, for all you admins out there, we are working as fast as we can to get features out. We are no longer on the Salesforce three-release cycle. We are going to be coming out with stuff on some monthly cycle. You're going to see that across all AI. You're going to see that across Data Cloud. We're coming out with things just on a different cycle, so please bear with us. I know how difficult it is even to keep up with our three releases, so just bear with us. We, in fact, have a release coming up very soon with Model Builder for some of the predictive AI stuff. We're making it easier so that you can build models with clicks even easier than you could before. I would say there's nothing earth-shattering there, but we're making it easier. You're going to see a lot more LLMs that you can bring. You're also going to see a lot more default LLMs, ones that are just shipped. We have a handful of models today from OpenAI and Azure OpenAI. You're going to start to see ones from other vendors as well. So they're just going to show up, everyone just has access to it.

Josh: Got it.

Bobby: And configuring those models within flows and prompts and all these things, it's just going to get a lot easier. So please bear with us. Keep up with the release notes because release notes are only three times a year. We're just updating release notes mid-release, which is weird.

Josh: Got it.

Bobby: Trust me, I know this is weird because I've been around a long time and I keep asking myself, "Should we be doing this?" And you know what? We're doing it, so here we are.

Josh: Not to panic anybody, but it feels like a fundamental change that Salesforce might be evolving to in the long run. So everybody obviously can keep your eyes on admin.salesforce.com, and we will try to keep you in the loop as those changes are made. And Bobby, do we have Trailhead content on this?

Bobby: Yes. In fact, we just came out with a Trailhead for Model Builder, just the predictive model piece. I think there's some coming for LLMs in the future, but just the predictive model piece that just shipped, so take a look.

Josh: Sounds great. Bobby, thank you so much for the great conversation and information. That was a lot of fun.

Bobby: Absolutely. Thanks for having me.

Josh: Once again, I want to thank Bobby for joining us and telling us all the great things about Model Builder. Now, if you want to learn more about Model Builder and of course Salesforce in general, head on over to admin.salesforce.com, where you can hear more of this show, and also, of course, our friend Trailhead for learning about the Salesforce platform. Once again, everybody, thank you for listening, and we'll talk to you next week.

  continue reading

153 つのエピソード

Artwork
iconシェア
 
Manage episode 430654158 series 2794780
コンテンツは Salesforce and Mike Gerholdt によって提供されます。エピソード、グラフィック、ポッドキャストの説明を含むすべてのポッドキャスト コンテンツは、Salesforce and Mike Gerholdt またはそのポッドキャスト プラットフォーム パートナーによって直接アップロードされ、提供されます。誰かがあなたの著作物をあなたの許可なく使用していると思われる場合は、ここで概説されているプロセスに従うことができますhttps://ja.player.fm/legal


Model Builder lets you do just that, building your own model based on the data in your org. What’s more, it fits seamlessly into the structures admins already understand. We can put our opportunity scoring model into a flow to sort high-scoring leads into a priority queue. And we can do all of this with clicks, not code.
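To make the idea concrete, here is a rough sketch in plain Python (not anything Salesforce ships) of what "train on your own won/lost history, then score new leads" means. The field names and records are invented purely for illustration; Model Builder does this kind of work for you with clicks.

```python
# Conceptual stand-in for what Model Builder automates: "train" on
# historical won/lost records by measuring how often each feature value
# appeared in won deals, then score new leads on a 0-100 scale.
from collections import defaultdict

def train(history):
    """history: list of (features_dict, won_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # (field, value) -> [won, total]
    for features, won in history:
        for key, value in features.items():
            stats = counts[(key, value)]
            stats[0] += int(won)
            stats[1] += 1

    def score(lead):
        # Average the win rate of every feature value we have seen before.
        rates = [counts[(k, v)][0] / counts[(k, v)][1]
                 for k, v in lead.items() if counts[(k, v)][1] > 0]
        return round(100 * sum(rates) / len(rates)) if rates else 50
    return score

history = [
    ({"industry": "tech", "source": "referral"}, True),
    ({"industry": "tech", "source": "web"}, True),
    ({"industry": "retail", "source": "web"}, False),
    ({"industry": "retail", "source": "cold-call"}, False),
]
score = train(history)
print(score({"industry": "tech", "source": "referral"}))    # high score
print(score({"industry": "retail", "source": "cold-call"}))  # low score
```

The resulting `score` function is the "model is just a function" idea: inputs in, a number out, which a flow can then act on.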

Building a predictive model that’s good enough

Einstein’s LLM capabilities offer even more possibilities when it comes to using your data with Model Builder. You can process unstructured text like chats or emails to, for example, measure whether a customer is becoming unhappy. And you can plug that signal into a flow to do something to fix it.
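As a sketch of that pattern (with a naive keyword check standing in for the LLM call, an assumption made purely for illustration):

```python
# Illustration only, not a Salesforce API: an "LLM" (stubbed here with a
# keyword check) turns unstructured text into a structured signal, and a
# flow-like decision step reacts to that signal.
NEGATIVE_WORDS = {"frustrated", "unhappy", "cancel", "disappointed"}

def classify_sentiment(text):
    """Stand-in for an LLM call that would label sentiment."""
    words = set(text.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def handle_message(text):
    """Flow-style decision: escalate when sentiment turns negative."""
    if classify_sentiment(text) == "negative":
        return "escalate_to_success_team"
    return "continue_normal_routing"

print(handle_message("I am frustrated and ready to cancel"))
```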

One thing that Bobby points out is that building a model is an iterative process. If you have 100% accuracy, you haven’t really created a predictive model so much as a business rule. As long as the impact of a wrong decision is manageable, it’s OK to build something that’s good enough and know that it will improve over time.
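A small worked example of that point: measure a model's accuracy against the 50% coin-flip baseline. Anything meaningfully above 0.5 is real uplift, while 100% accuracy usually means a feature leaked the answer and you have a business rule, not a model.

```python
# Toy accuracy check: 1 = won, 0 = lost. The data is invented.
def accuracy(predictions, actuals):
    hits = sum(p == a for p, a in zip(predictions, actuals))
    return hits / len(actuals)

actuals     = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
model_preds = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # 8 of 10 correct

acc = accuracy(model_preds, actuals)
uplift = acc - 0.5  # improvement over random guessing
print(acc, uplift)
```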

There’s a lot more great stuff from Bobby about how to build predictive models and what’s coming next, so be sure to listen to the full episode. And don’t forget to subscribe to hear more from the Salesforce Admins Podcast.


Full show transcript

Josh: Hello, everybody. Your guest host Josh Birk here. Today we are going to talk to Bobby Brill about Model Builder, which is going to allow you to create your own predictive and generative models to use within Salesforce. So without further delay, let's head on over to Bobby. All right, today on the show we welcome Bobby Brill to talk about Model Builder. Do you prefer Robert, Bob, Bobby? What do you like to go by?

Bobby: It's an excellent question. So I'm a junior. My dad is Robert Howard Brill Sr. I have the same first middle and last name. He goes by Robert, Rob, or Bob, so I've always been Bobby my whole life.

Josh: Yeah, I feel you. My brother is Peter. My father was a Carl Peter and my grandfather was a Carl Peter.

Bobby: Wow.

Josh: Got very confusing sometimes. Yeah, yeah. So introduce yourself a little bit to the crowd. What do you do at Salesforce?

Bobby: That's a great question. I've been at Salesforce almost 13 years. I was a customer of Salesforce for about three and a half years prior to joining, so I've been in the ecosystem for quite some time.

Josh: Got it.

Bobby: I started off in the customer success group, actually it was called Customers for Life. So I worked with customers getting onboarded onto Salesforce. I joined the product team back in 2015 in analytics, so we had this thing called Wave Analytics. So even well before AI I've been working with data. The last year I've actually been part of the Data Cloud team, so I do AI for Data Cloud, so it's called Model Builder.

Josh: Got it. Got it. Were you interested in AI before it blew up, before it got all big?

Bobby: Am I interested in AI? I think it's interesting. I think it's really cool technology, but what I really like is how the technology can help our customers solve their business problems. I was a customer, I understood what it was like to just have this tool available and put my data in and what can I do with that data. What I like is showing customers how AI can help them achieve their business goals. I really focus on how the AI helps business goals versus really caring about all the new technology and all the new models that are out. I've got other people that do that. I focus in on how are these models going to be used.

Josh: Chasing solutions and not trends.

Bobby: Correct.

Josh: Like it. Now, before we get into the product, one other question, I just like to ask people this because in technology I find the answers are so varied, was software something you always wanted to get into?

Bobby: Yes. I actually had a computer science degree, so I was writing software. What I realized is, while writing software is fun (I actually really like to debug software still), what I really enjoy is coming up with the ideas of what software should do or how it can help solve problems. Product management has really been the thing for me. When I started Salesforce, I just wanted to get into the company any way I could, so I didn't try for a product manager position-

Josh: Got it.

Bobby: ... but the second I got in, I had to figure out how to get to this position.

Josh: I like it. From a very high level, what is your elevator pitch for Model Builder?

Bobby: Okay, elevator pitch for Model Builder is build predictive models with clicks, not code. It started with actually predictive models. Now that GenAI is available, it's utilize custom, predictive, or generative models with clicks, not code.

Josh: Okay. Now, when we say model, how do you describe that within the input and output of how we interact with an AI?

Bobby: That's a great question, I don't think anyone's really asked me this specifically. But I think the way I would best describe it is a model is just a function. You first want to know what do you want that function to do. You have to understand what that function is capable of doing. AI is only as good as what the model is capable of doing. So in traditional machine learning, you would have a model that perhaps could tell you what is the likelihood of this lead to convert. And how did it understand that? Well, it had to get some examples of what did conversion look like, give me some leads that successfully converted, give me some leads that didn't, so the model can understand what are the trends for a successful outcome or a non-successful outcome. That was traditional machine learning. You'd have to train your model. Now, large language models are really good at putting sentences together. It understood text, it's read so much text, it's trained on that, and it knows when it sees certain words, here's the potential. It can predict the next word and the next sets of words to come out. And so if you think of models as just it's a function and you're going to give it some input and it's going to give you an output, what that function can do is totally dependent and there's so many different use cases. But that's I think how I would best describe a model, is it's a function.

Josh: Gotcha. Now let's talk a little bit about building models with clicks, not code. I'm trying to think of the right way to ask this. Let's start with what's your basic user scenario of something that they're going to try to build?

Bobby: So thankfully when we're talking about models, it's all around business data. We are a company that sells to businesses. They put their data in our systems, and while a model can do lots of things, we try to focus on what are the things that our customers are likely doing. The easiest one, Salesforce has had Sales Cloud the longest, so you would build an opportunity scoring model. And that is nothing more than a predictive model that understands what are the traits that go towards an opportunity that's going to close, or win I guess, versus an opportunity that's going to lose. That's probably the simplest thing, and this is what machine learning has really done over the past probably 20 years. People have been solving this problem forever. But every single customer wants this, and they want to make sure that it's trained on their data. They use Salesforce because they can fully customize how they want that data to be stored, what object. They're going to have relationships across other objects. It's not going to be everything in an opportunity object. It's going to be across multiple things. And they want to make sure it's their data. So why they don't want to use an out-of-the-box model is they don't know what goes into that. Some people like that, but our large enterprises, they like to understand what goes into that. So by giving our customers control and just saying, "Tell us where this data is," we will then go train that model, and we can predict the likelihood of an opportunity closing, or take Service Cloud, predict the likelihood of a case escalating. Or processes, business processes are really important: predict the time it's going to take for an opportunity to close or go from stage one to stage two, or a service case from the time it was created till when we think it's going to be resolved. These are all things that I think are bread and butter to Salesforce and things that they can predict. 
And then again, that's your traditional machine learning, that's where you're going to need to use your data to train that model.

Josh: I think it's very interesting because as you say, this isn't a brand new problem; these are questions people have had and have tried to answer. Right now I'm imagining the world's worst formula field that's trying to connect 17 different data points and make a guess about the probability of an opportunity closing.

Bobby: Exactly, yes.

Josh: How would you describe the level of precision that you're seeing from Model Builder these days?

Bobby: The level of precision depends on the data. Some models can be really accurate, but if you have a predictive model that's 100% accurate, then it's not a predictive model, it's some business rule. You've basically told the model, "Look at this field. When you see this value, 100% of the time it's going to be a converted opportunity or... " Sorry, I guess a closed won opportunity. "And when you see this variable, it's always going to be a loss." So there's a lot of times this is data leakage. This is very common in machine learning where you introduce something that basically the model just looks at that and it's like, "I know what I'm going to do." So you never want it to be perfectly accurate. And then there's other levels of accuracy. You could say that, "60% accurate, is that good enough?" Well, it's better than a coin flip, so you are already getting some uplift.
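A hypothetical check for the leakage Bobby describes might look like the sketch below. The field names and data are invented, and Model Builder performs its own analysis; this only illustrates the concept of a field whose values map one-to-one onto the outcome.

```python
# Flag fields where every value corresponds to exactly one label: a model
# trained on such a field just memorizes a rule instead of learning trends.
def leaky_fields(rows, label_key):
    """Return fields whose every value maps to only one outcome label."""
    fields = [k for k in rows[0] if k != label_key]
    leaks = []
    for field in fields:
        value_labels = {}
        for row in rows:
            value_labels.setdefault(row[field], set()).add(row[label_key])
        if all(len(labels) == 1 for labels in value_labels.values()):
            leaks.append(field)
    return leaks

rows = [
    {"stage_at_export": "ClosedWon",  "region": "east", "won": True},
    {"stage_at_export": "ClosedLost", "region": "east", "won": False},
    {"stage_at_export": "ClosedWon",  "region": "west", "won": True},
]
print(leaky_fields(rows, "won"))  # ['stage_at_export']
```

Here `stage_at_export` gives the answer away (it was captured after the outcome was known), while `region` shows mixed outcomes and is safe.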

Josh: Right.

Bobby: So then really it's up to the business to figure out what is the impact of a wrong prediction. And a lot of times the businesses, they know the impact of that wrong prediction. If it helps you prioritize the things faster, great. Then start with something that's, let's say, 60% accurate and then work towards something that is a little bit more accurate. It's an iterative process, so try to not be afraid of doing those iterations because you can get some uplift.

Josh: Yeah. I'm going to ask you the world's most leading question, but it's something that we keep trying to get people to think about, because when it comes to the data that the model is going to leverage, there's size, but there's also quality. How important is the concept of clean data to getting that prediction model?

Bobby: Clean data is very important to getting a good model. However, I don't think there's any company that thinks they have clean data. They all think their data's terrible. I think if you were to look at Salesforce, I mean, we know the data really well here, and I wouldn't say that it's clean. But I think you could argue that you have enough clean data to train these models. So it really depends on the use case.

Josh: Got it.

Bobby: If we're talking about sales data, you probably have a lot. Service data, that's probably the cleanest data out there. Service processes are very much you got to get the data in, you work on SLAs, there's very much these touch points. That data is really good. So if you ever want to try something like, "Where is my data the cleanest?" I guarantee you it's service. Sales is where people don't enter things in right.

Josh: Okay, so I really love that messaging because it's not that cleanliness isn't important, but you don't need perfection to start using these tools.

Bobby: Right. And then I will say that with generative AI and the ability to process a lot of, I'm going to call it, unstructured text, let's say chats or email, and getting information out of that is perfect for actually cleaning up your data or even putting it into a predictive model. Then the next thing is layer these two things together. There's going to be a data cleanup. You're going to be training a model, but then when you're actually delivering the predictions, you don't want to have to worry about cleaning up that data. That's where the LLMs can be used. Something comes in, you get a signal that says, "Hey, this customer, let's say, their sentiment is dropping." Well, how do you know their sentiment is dropping? Because an LLM is figuring it out and saying, "The customer's not happy," and the models are really good at understanding that, which is pretty cool.

Josh: That's a really interesting point. I've actually not tried to really consider that before, because to an LLM, let's take one of the most common data quality problems, if you have, say, redundant fields or you have duplicate data, the LLM isn't actually as worried about that as, say, a standard reporting tool would be.

Bobby: That's correct. Yep.

Josh: So I think we've been touching on it as we've been talking about this, but where do you see the role of an admin being when it comes to constructing and maintaining these models?

Bobby: The best part about Model Builder, which I haven't even talked about, is how it integrates back with Salesforce. What we've tried to do is we give you this tool where you can build this really, really good function. It's got an input, it's got an output. As long as admins understand that there's going to be some inputs needing to go into this and it's going to have an output, you can actually put the models wherever you want. The same way that you're building, let's say, a flow, and within the flow there's a decision tree, admins know how that works really well, or even the admins that can write a little bit of Apex code. So as long as you know how to do that, as long as you know how these different Salesforce functions are coming together, models are going to be just another input to that. So take a flow with a decision. Perhaps it's a case escalation... Or no, let's not even take case escalation, let's talk about leads and lead prioritization. You built a flow, you want to put leads in the right queue. Well, what if before you even put a lead into a queue you can predict the likelihood that this lead is going to be converted. You can say, "Hey, everything that has a score between, let's say, 80 and 100, 100 being the highest score, maybe you want to route that to a special queue." You understand queues, you understand decision trees in a flow. So now all you need to know is, hey, I have this score. How do I get the score? Maybe there's another team that figured that out or maybe you were comfortable enough because you know the data to actually train that model. Now you can just use this as a decision. You don't have to actually show the prediction to anyone. Who cares if that prediction is written anywhere. You don't need that for anything, you just need it at that point. So admins should start thinking about this just as another function. I think flow is a great way to look at it because it's a process. 
Something goes through step one, you do this, step two, you do that, and so on. And that model might be just part of that process.
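In plain Python, the decision Bobby sketches (score on demand, route, then throw the score away) might look like this. The queue names and the stand-in model are invented for illustration; in Salesforce the equivalent would be a flow decision element.

```python
# The model is just a function returning a score; routing uses that score
# once and never stores it anywhere.
def route_lead(lead, score_fn):
    score = score_fn(lead)  # on-demand prediction, discarded after use
    if score >= 80:
        return "priority_queue"
    elif score >= 50:
        return "standard_queue"
    return "nurture_queue"

# Hypothetical stand-in for a trained model endpoint.
fake_model = lambda lead: 92 if lead.get("budget_confirmed") else 40

print(route_lead({"budget_confirmed": True}, fake_model))   # priority_queue
print(route_lead({"budget_confirmed": False}, fake_model))  # nurture_queue
```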

Josh: When you're saying that's interacting with flow, are you saying that it's like I have a custom object, I have a custom field, I can make a decision tree based on that? Is it that same level of implementation?

Bobby: It could be. You don't have to write that prediction out anywhere. We can actually generate it live within that flow. So let's say a lead comes in, you kick off a flow, so you have a lead form, the lead comes in, it goes through a flow. You're not sure where that lead is going to go. You technically I guess created the record, and then you want to figure out where does this lead go. Well, you don't need to score it and write that score back to the lead. You can actually within the flow call our model endpoint so we can get an on-demand prediction. We're going to give you that on-demand prediction and we can route it somewhere. What's really cool within a flow, you can also call LLM models. So perhaps the lead comes in, you have some unstructured text, maybe you care about sentiment, maybe you want to understand what's the intent from some texts, an LLM theoretically can go do that. And then you get the output of that LLM and you pass that into the model. Now we know more about this lead or this person and then make a prediction, then file the lead away in a queue. That prediction becomes sort of, I'm going to call it, throwaway. You don't need to use it anywhere.

Josh: Got it. It's a fun rule [inaudible 00:15:42]. You get the data-

Bobby: Correct.

Josh: ... on demand and then... Yes, gotcha. Now I think we just touched on sort of two different forms of Model Builder. We have the clicks, declarative generate your own model, and then we have bringing in an LLM, and I think this is what we keep referring to as bring your own model. What does bring your own model look like and what kind of models are we supporting there?

Bobby: That's a good question. When I talked about what's the value of Model Builder, my elevator pitch, it was all about building stuff with clicks. And that's because we're really allowing all of our customers to have access to this stuff. But the reality is there's only so much you can do with clicks and then the rest you're going to do with code. We have this idea of bring your own model, whether it's a predictive model or an LLM. You're just connecting these models that live in your infrastructure, customer's infrastructure, whether it's SageMaker, Vertex, Databricks, or maybe it's your Azure OpenAI model, or maybe it's your Google Gemini model. We're giving you the ability to just connect these models directly to Salesforce so you can operationalize them the same way that you would as if it was built on the platform. So you'd have full platform functionality, but the models themselves, they're not hosted in Salesforce. So there's all kinds of things you can do with that. Your data science team can make sure that they have full control. Let's say they fine tune the LLM so it talks specifically in your brand language, for instance. That's a use case.

Josh: Gotcha.

Bobby: We want to give customers the ability to do this on the platform as well. So the same clicks, not code, we want to bring that to LLMs. That's a future thing. We want to give that capability.

Josh: I'm going to make a comparison here, and I'm going to be a little controversial to my artist friends who I've had these arguments with, but I know artists who have actually built their own LLM model based on their own art, and then they're treating these models as their little AI buddy to try different things very quickly and then kind of motivates them in a very specific direction. Is that a quality comparison to what you're seeing people are doing when building their own models?

Bobby: It's a good question. I don't know that that's... Well, I don't know. I think what we are seeing, so brand voice for sure is something that people want an LLM to do. They put sentences together really well, but if you are distributing anything to your customers, you want to make sure that the sentences that are generated are on point to how you would speak as if it was a regular person. So fine-tuning that with specific words and phrases, that's what we're starting to see some customers do with their own LLM. But we're also seeing that there's other techniques, retrieval augmented generation, or RAG. People call it RAG, which I feel like is a... I can't just say RAG without saying retrieval augmented generation to customers because I don't want to be looked at like I'm crazy. But then also-

Josh: It is a sort of unfortunate acronym. You're correct, yeah.

Bobby: It is. But I guess it's getting common, so I'm correcting it... Or not correcting, I'm not saying the full thing as often. But we're finding that that is another approach to not having to train those models. I think research is still out on which is the most effective mechanism because you can say at the time that you want that LLM to process something, say, here's some examples. So you don't have to train because training an LLM is pretty expensive right now.

Josh: Yeah. Both from a quantity and processing point of view, right?

Bobby: That's correct.

Josh: Yeah. Take that one step further for me. How does RAG change the game a little bit?

Bobby: With the ability to quickly find some examples of things that you're looking for... Okay, let's say you are replying back to a customer for customer service, you want to automate it. So customer asks a question, and the LLM obviously can't really answer the question unless you provide it some information. So you could give it some knowledge right away. So first, find some similar cases, find the resolution of these cases, and summarize that and go back to the customer. So simply by searching for certain resolutions and responding back or summarizing those resolutions, you already have brand voice because those resolutions themselves, we're assuming, were all typed in by someone who understands how you're supposed to respond back. And then, let's say the LLM responds back, it's already similar, and that gets recorded as the resolution. Now the next time you're responding back, it already sort of knows how to respond, and the next time, if you're searching for similar things, you'll probably get the same kind of response back. Did that make sense?
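A minimal sketch of that retrieval step, with simple word overlap standing in for real vector search and invented sample cases. Production RAG systems use embeddings, but the shape of the idea (retrieve similar past cases, feed their resolutions into the prompt) is the same.

```python
# Rank past cases by word overlap with the new question, then ground the
# prompt with the top resolutions instead of fine-tuning the model.
def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_prompt(question, knowledge, top_k=2):
    ranked = sorted(knowledge,
                    key=lambda case: similarity(question, case["subject"]),
                    reverse=True)
    examples = "\n".join(c["resolution"] for c in ranked[:top_k])
    return f"Using these past resolutions:\n{examples}\n\nAnswer: {question}"

knowledge = [
    {"subject": "password reset not working",
     "resolution": "Clear cache, resend reset link."},
    {"subject": "invoice missing line items",
     "resolution": "Re-sync billing records."},
]
prompt = build_prompt("my password reset email never arrived", knowledge, top_k=1)
print(prompt)
```

Because the retrieved resolutions were written by your own agents, the generated reply inherits your brand voice for free, which is the point Bobby makes above.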

Josh: It did. It did actually. Yeah. It's like, as a fishing analogy, you're fishing in the sea that you already have. You're bringing in examples that have already been contextualized within your data and you're just like, "Go ahead and just start there." Does that sound accurate?

Bobby: Exactly. Yeah, that's exactly it. And then the other thing is, as you're responding back you could... Because when you're talking to an LLM, you have to generate this prompt. I know this isn't part of the subject here, but Prompt Builder is another great tool that's on top of Model Builder where you basically tell the LLM what you want it to do. You program that LLM, and you can insert the retrieval augmented generation wherever you want. It's like you're building an email template and you're just saying, "Hey, here's some similar cases." And then around that within the Prompt Builder you can say, "Here are some examples summarized like this."

Josh: Got it.

Bobby: So you're using this LLM as if it's an assistant that can go do something for you and you give it a bunch of instructions and you put that all in one place. It's pretty cool.

Josh: Yeah, no, it's okay. I still get another nickel if we say Prompt Builder, so it's a good advertisement.

Bobby: Perfect.

Josh: And on that note, so I'm thinking of the Prompt Builder interface and where you build out the prompt and then over on the right we've got like, "Here are the models you can use." So are we going to make that portion transparent to Model Builder and be like, "Oh, hey, my data science team created this specialized model based on our marketing, our brand, our voice. Please use that instead of, say, OpenAPI... or OpenAI 3.0 or something like that."

Bobby: Yep. So that's actually in there today. So if you're using Prompt Builder, when you look at the models, there's a drop-down. The default is going to be the standard model. There's a drop-down there, you can change that to custom models. Once you change that to custom models, it's any other thing that shows up in Model Builder. So this could just be, let's say, OpenAI 3.5 turbo and you've configured it slightly, you've changed the temperature, one of those parameters. We have a model playground that allows you to do that. Or it's an LLM that you brought in. So whether it's the ones that we have GA today or the ones that are coming, it's a model that's your own and you have full control. So then that just shows up in Prompt Builder and you build the prompt. In the future, we're looking at how to, I don't know, give you more controls over which LLM should show up in Prompt Builder versus the ones that you don't want to have show up. So while today it's everything, we know that our customers want that finer granularity, so we're thinking about that.

Josh: Got it. Well, let's touch on those two points. What is availability for Model Builder looking like today?

Bobby: Model Builder, it's actually packaged with Data Cloud. So if you have Data Cloud... I didn't say buy Data Cloud because Data Cloud is now being packaged with many different things. It's a consumption-based pricing model, so this is new to a lot of our customers. But what's cool about doing a consumption-based model for pricing is this tool can just be there. We want Data Cloud to be an extension of the platform, just like you're building custom objects and things like that, we want Data Cloud to be as easy as that. It's just there for everyone. It's a tab called Einstein Studio within Data Cloud. That name may or may not change in the future, so just bear with me if it does. I know we're talking about Model Builder and we have a tab called Einstein Studio, and we like to say Einstein 1 Copilot Studio. I love marketing at Salesforce. It's fun because it changes and I'm like, "I got to just go with it." So Einstein Studio, it's packaged with Data Cloud. So you get Data Cloud, you go into the Data Cloud app, you find Einstein Studio. But it's just a tab. So just like you can find the Reports tab and any app that you want, you can put Einstein Studio in whatever app you want. So if you're an admin, it's just a tab, you'll find it. It's only there if you have Data Cloud turned on in your org, but that is currently how it's packaged. If that's the future, whether it changes, who knows?

Josh: Who knows? I do feel like if there's one thing our audience has learned, if they've been in the Salesforce ecosystem for even half a second, it's that all things might change. They might change their name, they might change their location, they might change their pricing. So if you're listening to this and you're interested, please check out the help documents or talk to your account executive. Speaking of things that might change, anything on the roadmap you want to give a shout-out to?

Bobby: Model Builder itself, I mean, there's lots of things we're doing with Model Builder just in this release. Actually here, this is really important, for all you admins out there, we are working as fast as we can to get features out. We are no longer on the Salesforce three-release cycle. We are going to be coming out with stuff on some monthly cycle. You're going to see that across all AI. You're going to see that across Data Cloud. We're coming out with things just on a different cycle, so please bear with us. I know how difficult it is even to keep up with our three releases, so just bear with us. We, in fact, have a release coming up very soon with Model Builder for some of the predictive AI stuff. We're making it easier so that you can build models with clicks even easier than you could before. I would say there's nothing earth-shattering there, but we're making it easier. You're going to see a lot more LLMs that you can bring. You're also going to see a lot more default LLMs, ones that are just shipped. We have a handful of models today from OpenAI and Azure OpenAI. You're going to start to see ones from other vendors as well. So they're just going to show up, everyone just has access to it.

Josh: Got it.

Bobby: And configuring those models within flows and prompts and all these things, it's just going to get a lot easier. So please bear with us. Keep up with the release notes because release notes are only three times a year. We're just updating release notes mid-release, which is weird.

Josh: Got it.

Bobby: Trust me, I know this is weird because I've been around a long time and I keep asking myself, "Should we be doing this?" And you know what? We're doing it, so here we are.

Josh: Not to panic anybody, but it feels like a fundamental change that Salesforce might be evolving to in the long run. So everybody obviously can keep your eyes on admin.salesforce.com, and we will try to keep you in the loop as those changes are made. And Bobby, do we have Trailhead content on this?

Bobby: Yes. In fact, we just came out with a Trailhead for Model Builder, just the predictive model piece. I think there's some coming for LLMs in the future, but just the predictive model piece that just shipped, so take a look.

Josh: Sounds great. Bobby, thank you so much for the great conversation and information. That was a lot of fun.

Bobby: Absolutely. Thanks for having me.

Josh: Once again, I want to thank Bobby for joining us and telling us all the great things about Model Builder. Now, if you want to learn more about Model Builder and of course Salesforce in general, head on over to admin.salesforce.com, where you can hear more of this show, and also, of course, our friend Trailhead for learning about the Salesforce platform. Once again, everybody, thank you for listening, and we'll talk to you next week.
