
Britney Muller is an AI consultant and keynote speaker advising tech companies on AI strategy, machine learning, and workflow automation. With over 10 years of experience in generative AI, she has developed over a dozen in-house AI applications. Britney is a former Marketing Manager at Hugging Face, where she helped launch the largest open-source, multilingual model.
Here’s a glimpse of what you’ll learn:
- [3:18] Why brand mentions are the new backlinks in AI-generated content
- [7:04] How large language models process and retrieve information differently from search engines
- [14:03] The manual tasks marketers should automate — and AI’s limitations in research
- [22:19] Transforming website pages into embedding vectors for SEO and content optimization
- [32:31] How to leverage AI to conduct content gap analysis
- [36:45] Britney Muller shares marketing insights from the Reddit API
- [44:38] Creating a Chrome extension to automate AI-generated responses
- [47:37] Tips for identifying AI automation opportunities in daily workflows
- [53:55] The ethical challenges and risks of AI adoption in marketing
- [1:03:44] Britney's unique approach to time management using taper candles
- [1:09:22] What is the grandfather clock theory?
In this episode…
AI is changing the game for marketers, but many don’t leverage it to its fullest potential for their businesses. Rather than producing AI-driven content at a faster rate, companies should focus on building a strong brand presence and leveraging AI for intentional automation. How can marketers cut through the noise, avoid common pitfalls, and harness AI to drive measurable results?
While AI can be used for pattern recognition, automation, and audience research, AI optimizer Britney Muller warns against relying on it for fact-based decision-making. You can leverage AI without losing the critical human element by transforming website content into vector embeddings. Analyzing these embeddings allows marketers to identify content clusters, uncover gaps in their website’s information architecture, and optimize internal linking structures to improve search engine rankings. Britney also recommends utilizing Reddit APIs to extract real-time customer sentiments, uncover trending pain points, and analyze top-performing content in specific communities.
In this week’s episode of the Up Arrow Podcast, William Harris chats with AI consultant Britney Muller about practical AI strategies for marketers. Britney explains why brand mentions are the new backlinks, how to build AI-powered internal tools, and the ethical concerns marketers should consider when adopting AI.
Resources mentioned in this episode
- William Harris on LinkedIn
- Elumynt
- Britney Muller: Website | LinkedIn | X
- Actionable AI for Marketers course on Maven
- “The Biggest Spender on Meta Ads Believes ‘Data-Driven’ Is a Myth With Christian Limon” on the Up Arrow Podcast
- “Content Amplification: How To Create and Distribute Content That Gets Results With Ross Simmonds” on the Up Arrow Podcast
- Outliers: The Story of Success by Malcolm Gladwell
Quotable Moments
- "Brand mentions are the new backlinks. It works as a vote of confidence in AI search."
- "AI is incredible at pattern detection and sentiment analysis — something marketers often struggle to do manually."
- "You wouldn’t want to apply AI in high-risk applications like healthcare or medical diagnostics without oversight."
- "Taking imperfect action is so powerful; it stifles your feedback loop if you wait for perfection."
- "We have to be mindful of how AI is used in marketing; authenticity will always stand out."
Action Steps
- Leverage AI for audience research insights: Using AI tools like the Reddit API can help uncover real-time consumer sentiment and trending topics. This allows marketers to craft campaigns that resonate with their target audience rather than relying on guesswork.
- Optimize website content with vector embeddings: Transforming web pages into vector embeddings helps identify content clusters and internal linking opportunities. This improves search engine rankings and ensures a better user experience by making information easier to find.
- Automate repetitive marketing tasks: AI can streamline data enrichment, email drafting, and customer service responses, freeing up time for strategy. This allows marketing teams to focus on high-value creative work instead of being bogged down by mundane tasks.
- Be cautious when using AI for research: Large language models are probability engines, not fact-based search tools, meaning their outputs can be inconsistent. Marketers should always verify AI-generated data before using it in decision-making to avoid misinformation.
- Maintain a human touch in AI-generated content: While AI can optimize workflows, it’s essential to add a layer of personalization and authenticity. Brands that balance automation with human creativity will stand out and build stronger connections with their audience.
Sponsor for this episode
This episode is brought to you by Elumynt. Elumynt is a performance-driven e-commerce marketing agency focused on finding the best opportunities for you to grow and scale your business.
Our paid search, social, and programmatic services have proven to increase traffic and ROAS, allowing you to make more money efficiently.
To learn more, visit www.elumynt.com.
Episode Transcript
Intro 0:03
Welcome to the Up Arrow Podcast with William Harris, featuring top business leaders sharing strategies and resources to get to the next level. Now let's get started with the show.
William Harris 0:16
Hey everyone. I'm William Harris. I'm the founder and CEO of Elumynt and the host of the Up Arrow Podcast, where I feature the best minds in e-commerce to help you scale from 10 million to 100 million and beyond, as you up arrow your business and your personal life. Today, I'm talking to Britney Muller. And when I was thinking about putting this together, I was thinking, you know, AI is changing everything, but let's be honest, most marketers have no idea how to actually use it in a way that moves the needle, and that's where Britney comes in. Britney is a force in the AI and marketing world. She's been a trusted advisor to tech giants like Amazon, the former marketing and PR lead at Hugging Face, and a pioneer in making advanced AI accessible, practical, and insanely valuable for businesses. If you've learned from her SEO work, watched her legendary Whiteboard Fridays, or read her game-changing LLMs 101 guide, you already know she's one of the most sought-after experts in digital marketing and AI today. Here's the thing: Britney doesn't just talk theory. She's about real, actionable AI applications that give marketers a serious edge. Today we're diving into how to use AI to automate, optimize, and innovate without losing the human touch. She's going to break down exactly where AI fits into your marketing workflow, what most brands are getting completely wrong, and how you can leverage AI to work smarter, not harder. Here's what's got me the most excited: Britney is going to screen share live and show us exactly how to put this into action. So if you're a marketer, a founder, or anyone looking to unlock AI's real potential, this episode is a must-listen. Britney, welcome to the show.
Britney Muller 1:48
Thank you so much for having me, William. What an amazing intro. I feel lucky to be here
William Harris 1:54
Now, yeah. Well, I'm excited to have you. Ross Simmonds was on the show before, and Ross, absolutely genius, you know, content marketing genius, and friend of mine. And he's like, you have to talk to Britney. She is the AI voice right now, today, that you need to have on the show. And I was like, okay, done. And we talked, and it was like, yes, 100%, gotta have you on here.
Britney Muller 2:15
He's the best. He's the best. I love his new book. It's so yes, so good,
William Harris 2:21
yeah, yeah, that's a good shout out for Ross too. Yes, if you haven't listened to that episode, go back, and we actually get into his book a little bit in that episode as well. So yeah, shout back out to that. I want to dig into the good stuff here. Before we do, I want to announce our sponsor. This episode is brought to you by Elumynt. Elumynt is an award-winning advertising agency optimizing e-commerce campaigns around profit. In fact, we've helped 13 of our customers get acquired, with one that sold for nearly $800 million and one that IPO'd. You can learn more on our website, Elumynt.com, which is spelled E-L-U-M-Y-N-T dot com. That said, on to the good stuff. I want to start off by talking about the problem in AI and why most marketers are asking the wrong questions, because there are some common misconceptions about AI and marketing. I think everyone thinks that AI is a popularity contest: how do I show up in AI outputs? It's fine, that's good, but it's the wrong question. What's the real way to get consistent brand mentions in AI-generated content?
Britney Muller 3:18
I like to put it like this: brand mentions are the new backlinks. So if you're familiar with the SEO world, backlinks, you know, were, and still to some degree today are, sort of the currency of ranking and showing up online. But it's not so dissimilar to the real world, right? If you were to go to three random people in a new city, asking them where the best steakhouse is, if they all say the same thing, you're more likely to trust that, right, and to go check it out. So it works very similarly, as like a vote of confidence. And I feel like, William, you do this so beautifully in the podcast. Like, as I was listening to some of your episodes, your internal mention machine, naturally, in your brain, is on fire. Like, you're constantly referencing, like, we've already referenced Ross, you know, you're giving shout outs, and that's powerful, right? Because listeners, we can't listen to a link, right? Like, it doesn't work like that. And very much in the same way that we understand language and context is how this new AI technology and large language models are understanding it as well. So it's much more kind of that natural language context, semantic space, which will be to the advantage of marketers who can not only get, you know, placements, qualified placements, where people are talking about their product or their services online, but what are the creative ways that you can really empower your customers to talk about you for you, right? And so that's kind of the next level that I've been thinking a lot about lately, that brands doing that right will be showing up more and more in AI search and LLMs.
William Harris 5:17
I love that. Whenever I thought about link building, back in the day of, you know, the heyday of link building and SEO, the way that I would explain it to clients would be: there's the plethora of answers, and then there's relevancy of answers, and then there's, you know, maybe the value of answers. And so, like, let's just say plethora of answers. Again, if you interview 100 people and ask them where the best steakhouse is, and you know none of them, you don't know any of these people, but they all say the same thing, that carries a lot of weight. You're like, okay, there's just a significant number of answers pointing me to this. I don't know if these people are trustworthy or not, but that has something to do with it. But then the flip side of that is, if you ask somebody who you know, who happens to be, like, a steak reviewer, you know, they're your friend, and you say, what's the best? They give you an answer. It's only one answer, but that answer carries a lot of weight, maybe enough to challenge the 100 random answers that you have. You're like, I don't know, but you know, they said that this is the best one, so I'm maybe inclined to believe that. And then there's the relevancy, right? And so relevancy just being that it's like, hey, if you asked people who you know are in the steak industry, and maybe they're not, like, steak kind of stores, but maybe they sell the meat to the restaurants, and you're just like, hey, well, okay, they're in the industry, there's some relevancy here. Tells me that they have some idea of what makes a good steak, versus somebody who has no idea what makes a good steak. And so, like, all of those play into it. Do you know if AI is at the level that it's looking at things that way, contextually, beyond just, like, hey, I did a quick search, and here are the ones that popped up first at Google, and so I went with it, and so Google's done the research now to suggest that these are the most reputable ones? Is AI at that point, or is it just piggybacking off of work that's already being done?
Britney Muller 7:04
So to step back for a second, maybe just quickly define, like, the AI that we're referring to as those large language models, right? So, and correct me if I'm wrong here, but like ChatGPT, right? And Claude, yes. So that particular technology, they're probability engines. They're continuation machines. And based on everything they've read on the entire internet, they are aggregating basically the average of everything. So to your point, it's really good at, you know, summarizing massive volumes of information about a particular topic or, you know, areas or locations, but it's likely outdated, right? So your point about, like, this survey of 100 people, that might change year by year, right? And LLMs often take about a year to train, the really large ones, and so they're about a year behind. There is hybrid model functionality that they deploy to, like, pull in some real-time search results, but it's not the same thing as refreshing that survey that you just mentioned on a regular basis. It is kind of built on a little bit of the antiquated information with a sprinkle of the real-time relevance that you're referring to. That's
William Harris 8:28
interesting. I hadn't thought about it that way, but I like that. And so there's maybe some negativity towards that, but there's some positivity in the idea. It's like, I don't know if you feel this way, but when I run a search through ChatGPT or Claude, I still sometimes feel like, I overall like this. I like it better than what I was maybe getting if I was just turning up a basic search on Google or anything like that.
Britney Muller 8:51
Totally, totally, because the ability to have so many more long-tail queries and communicate via natural language is completely different, right? If I have a really specific question, like I had this weird question about Minnesota, like out-of-state licenses that are expired, and I know I can find the information online, and I did verify it, but I wanted it explained to me in a way that was just, like, quick and easy to digest in the moment. And I knew I wasn't going to get that from search, right? I knew that an LLM could get me really close, if not some of that information, right away.
William Harris 9:33
That makes sense. I like that. Yeah. A lot of people will say, you know, hey, it can do everything. No, that's not true, but what are surprising areas where it is really good at doing things?
Britney Muller 9:46
Yeah, it's incredible at taking large volumes of information and pulling insights. So its ability to pattern match, pattern detect, in ways that, you know, you and I and anyone, we couldn't do that manually. And so another thing that's been on my mind a lot lately is audience research insights, right? We're told as marketers and as founders that we need to do all of this market research and market audience fit, but the amount of data collected in order to do that is so massive, oftentimes, that insights and patterns get lost because we're not able to see that. And so using something like a large language model to pull out sentiment, real-time sentiment, on product features, right? I just did this for a workshop I did where I invented a make-believe water bottle. And I wanted to know, real time, let's get a temperature check on the water bottle features that people really like right now and really dislike. And we were able to do that really quickly using the Reddit API. And so you can dig deeper and, like, do that with your brand and your competitors, right? Those kinds of insights. What kind of language is my audience using to talk about this stuff? What top-performing post types do best, right? Is it humor? Is it comparison? Is it seeking help or questions? What is it? And then just sort of, you know, the ability to get that real-time temperature check on topics, on industry, it's never been easier. And I think also, if you're a little bit technically inclined, there's no excuse now, because we have Claude Sonnet 3.5, 3.7, that is leaps and bounds the best programming assistant we've ever seen. And this is where, like, this new generation of vibe coding people are starting to talk about, where never before have we been able to just, like, program based off of vibes. It's such a silly term, but it kind of adds texture to what I'm talking about, that you're able to just really, really quickly make changes and add functionality and progress applications and tools and websites like never before. So those are my favorite applications. I
William Harris 12:21
love that you called out the Reddit API for even what you were using on that. Because when we do a lot of customer research manually, like you said, we're talking about, like, how do we want to position this brand that we just brought on in an ad? How do we want to make sure that, like, what are the things that we're saying? Yeah, oh, we use Reddit all the time because it's one of the best ways to go. It's like, I don't know, a zipper, right? And it's like, okay, well, what do people not like about your competitors? Oh, they're talking about how the zipper absolutely is terrible on your competitors. Great. Let's talk about how your zippers are absolutely amazing, right? It's like, if it's a thing that's being pulled to the top of this, what are they talking about within your competitors' ads or their organic posts? Okay, great, I'm gonna pull that out. It's like, so what are people complaining about? How can we use that to our advantage? To say, well, we're not like that, here's the things that we're really good at. You don't even have to say we're not like this other person, but you just simply saying you cut the best zippers. And if you're somebody who's like, I hate that they have terrible zippers, you're now all of a sudden intrigued. You're like, okay, this has been a problem of mine. Zippers are dummy data here, but you get where I'm going with this. Yes,
Britney Muller 13:19
yes, absolutely. Being able to surface those concerns just so immediately and quickly is powerful, right? It's actionable. You can immediately take that, use it, whether it be in marketing, advertising copy, you know, website copy, email blasts. You can turn it around so, so quickly, and that's what I'm seeing some of, like, the next-level marketing teams doing: deploying really tailored tools that are delivering insights like that on a regular basis.
William Harris 13:52
So besides that, what are some other things that marketers are wasting time on doing, let's say, manually, that you're like, you should be using AI for this? Oh,
Britney Muller 14:03
yeah, that's a good one. I'm starting to think that, let's see. You know, it's tricky, because there are so many applications, and some are far better than others, but more often than not, the AI piece of it actually plays a bit of a small role, right? So it might just be, like, data enrichment in a particular part of a process for reporting. It could be automating, you know, a report summary through Google Analytics, right? Using Google Data Studio, piping in an LLM API, and just summarizing this month's traffic over last year, you know, doing that sort of thing. I think it's great for kind of cleaning up copy, whether it be, like, being quicker at turning around emails. Customer support outreach is phenomenal. Like, the ability to basically start with something rather than a completely blank slate is really, really good. I, you know, I always steer people away from using it for research. And, you know, I'm still, on a weekly basis, seeing people trust LLMs to do, like, PPC research and keyword research. And I'm like, okay, run it again. Run that same prompt again, and it's going to be different results. It's going to be different numbers. And the reason that is, is it's not a research engine, right? It's also not a truth or a fact engine. It's literally developed to model and mimic the laws of language and reflect everything it's ever read online, and so we have to rely on, you know, research best practices, finding, like, the true, authentic data sources that we can then use in a workflow that maybe uses AI to do some sort of assistive support with that data. But it shouldn't be the first place you go to surface some of those results. And the last thing I'll add to that is, it's important to note that the reason why these models sound so human-like is because of one tiny piece of functionality that was added semi-recently to large language models: this randomness. So, adding a degree of randomness to that probability output. So again, it's outputting the average of everything it's seen, but there's a sprinkle of randomness to make it sound interesting and human-like. When there's no randomness involved, or the temperature setting's cranked way down, it gets really robotic. It gets cold and repetitive. Sometimes, like at Hugging Face, we would test models, and it would just repeat the same sentence, and we're like, oh shit, that's not good. And so a degree of randomness gets it out of that sort of predictive loop, but it causes issues, right? Because it's not so controlled, because it's not deterministic and based on fact, it is going to have errors. And because it's a continuation engine, if it produces an error, it can oftentimes compound and double down on that, and people get in trouble because it sounds trustworthy, it sounds confident, and it's been reliable in the past, right? So why would it fail me now? And there's a group of researchers that are coming out and sort of waving a bit of a warning flag that the more trustworthy these systems become, the more dangerous they are, because we're less likely, right, to double check some of the outputs, or we put something into production that was an error, or is no longer, you know, a law, or this or that. So super important to be aware of that.
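For anyone who wants to try the report-summary automation Britney mentions here, a minimal sketch in Python, assuming the OpenAI client library and a traffic export you've already verified yourself; the model name, metrics, and prompt are illustrative, not from the episode:

```python
# Minimal sketch: hand already-verified analytics numbers to an LLM and ask only for a summary.
# Assumes the `openai` and `pandas` packages and an OPENAI_API_KEY environment variable.
import pandas as pd
from openai import OpenAI

client = OpenAI()

# Hypothetical monthly traffic export (e.g., pulled from Google Analytics).
# The spreadsheet is the ground truth; the LLM only rewords it.
traffic = pd.DataFrame({
    "month": ["2024-01", "2025-01"],
    "sessions": [48210, 61544],
    "conversions": [812, 1045],
})

prompt = (
    "Summarize this month's traffic versus the same month last year in three plain-English "
    "sentences for a marketing stakeholder. Use only the numbers provided; do not estimate.\n\n"
    + traffic.to_csv(index=False)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model; name is illustrative
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,      # low temperature: we want a faithful summary, not creativity
)
print(response.choices[0].message.content)
```

The design choice echoes the caution above: the numbers come from your own export, and the model only supplies phrasing, never facts.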
William Harris 18:11
it's a really good call out. I do, like, you know, use it sometimes for some mild analytics work. I don't use it for anything significant. But to your point, that is the thing. Even when I do run it through, I'm just like, ah, just upload this spreadsheet, just see what it says, right? Like, things that don't matter. And I'll give you an example. There's a basketball league that I'm in where, like, three-pointers count as three points, but regular baskets count as one point. And it's really dumb, and I don't like this, right? So it's a 200% difference versus the 50% difference. So I wanted to know, statistically speaking, is it even possible for somebody to win if they aren't just shooting three-pointers the whole time, right? And so, yes, yeah, I love that you're laughing at my nerdiness. But the reality is, it's inconsequential. It doesn't matter if it's wrong. But I came up with, you know, some really interesting math that basically suggests, just go down there, shoot the three. You don't even need to worry about a rebound anymore. Like, just shoot it. If you make it, you're good. Even if you only have, like, a 33% chance of making the shot, you're pretty much guaranteed. And so it's very interesting. But the thing is, when you try to work back that math, let's say that you're doing this with a client, and you're doing this with something that is of consequence, to your point, that it can compound. And I've run this a couple of times, just to test exactly what you're saying. And you see, you're like, that's not the right answer. I know it's not the right answer, but it gave me a very convincing answer. And if I didn't know that that wasn't the right answer, I'd be very inclined to believe it. And one of the problems then is you can't work backwards to understand how it got to that math, and so you don't know where it broke down. It's like turning in your math, you know, without showing the work. And you're just like, the teacher can't even help you and say, here's where you're wrong, because you're like, I don't know where you're wrong, so I just have to abandon it, because I can't even help correct where you went wrong.
Britney Muller 19:48
Yes, exactly, exactly. And what's interesting is the underlying technology doesn't reference where it's getting these things, because it doesn't have that information. It's not information retrieval like Google, where we have an index of sites and pages that are then, you know, organized by relevance and authority, and, you know, populated in a way that hopefully serves the user the best. This is really just a giant, ugly amalgamation of everything it's ever seen on statistics for this topic, and hopefully it's come across enough similar questions like yours to produce an output that looks right, or, you know, close. But when given entirely new questions, entirely new problems it's never seen before, it does terrible, right? It has nothing to lean on. It has no previous awareness of those things. But yeah, on the back end, it's just this giant neural network of essentially tokens that have established relationships to one another. And so it's an interesting way to think about how different it is from Google search. And what's scary is it looks and feels like a search engine. You know, you go to it and, you know, you have a box, and you put text in and you get text out, but the functionality is so incredibly different. Yeah,
William Harris 21:17
so there are things that it can do well, there are things that it can't do well. We're hopefully making some good illustrations for people to know the differences of what you should and shouldn't be relying on. But let's get into the things that it can do well a little bit deeper. I'm gonna put you on the spot and have you screen share a couple of things, because there are some really interesting things that you've been doing. You teach this as a class. This is a class on Maven, I think, right? Yeah. And so we're gonna dive a little bit into this. We're not gonna give away everything, you still gotta go take the class, but there's a couple of things that you're doing that I was like, oh, I want to show some people at least some glimpses of some of the ways that they can use AI in very practical ways. The first one that I wanted to dig into was transforming website pages into vectors. And I'll let you explain why this is helpful. But, like, what I hear why this is helpful: it can allow it to, what, better understand the website, especially from, like, a design perspective and things like that, so that way you can come up with a little bit better output, and it's not necessarily having to crawl a website or things like that. But you correct me, because I don't know this as deeply as you do. I don't use the vectors yet, yeah,
Britney Muller 22:19
oh my gosh, I am excited to put you on to this, because it's literally, it's so fun. It's not only, like, easy, but it's fun, it's exciting, and it's so powerful, because this is exactly how large language models consume information and organize information. And so what's funny is, like, our hardware has not evolved, right? We're still processing this on, you know, silicon computer chips, ones and zeros. These are all still ones and zeros. What the embedding vectors do is basically pull all the text from any given web page and feed it through a GPT, OpenAI model that is able to evaluate the text, evaluate the context, and apply a vector embedding, which is just a giant number matrix. If you remember, I think, algebra, right, you would do matrix functions, and, you know, you would add them together or multiply them. So the fascinating thing about these vector embeddings is we don't necessarily know what each and every individual number is referencing or related to, but it holds rich context. And so, for example, if you take the vector embedding of king, minus the vector embedding for the word man, and add the vector embedding for woman, mathematically it's almost identical to the embedding for queen. And so it's finding statistical structure in language in order to do these pretty incredible things that we see LLMs doing today. But that all starts with, you know, taking text and making it numerical, you know, in a context space that this neural network is able to work with. And so we use this embeddings model from OpenAI to do that. So first I will say I do it in Screaming Frog, and there's a whole workflow on how to do that. Let me actually, I'll share my screen here. So one of my previous students, Everet, did this amazing workflow of internal link opportunities, and part of that initial process is getting the vector embeddings for every single URL. So this is kind of the article. He takes you through step by step, and he produced this within my course, which I'm so proud of, and I've since helped him automate a lot of this, so it's no longer this many steps. We have different parts of this that you can kind of do all at once with the Google Colab notebook. But essentially, what you do first is you'll literally just go to Screaming Frog and enable the ChatGPT "extract embeddings from page content" option. You can do this on other tools as well. I think Screaming Frog is a lot of our, you know, old-school SEO go-to, totally.
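As a rough illustration of the embedding step Britney is describing, here's a small sketch using the OpenAI embeddings endpoint; the model name is an assumption and not necessarily what Screaming Frog uses under the hood, and the page text is made up:

```python
# Sketch: turn text into embedding vectors and sanity-check the king - man + woman ≈ queen idea.
# Assumes the `openai` and `numpy` packages and an OPENAI_API_KEY environment variable.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    """Return one embedding vector (numpy array) per input string."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [np.array(item.embedding) for item in resp.data]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

king, man, woman, queen = embed(["king", "man", "woman", "queen"])

# The analogy vector should land close to "queen" in this semantic space.
analogy = king - man + woman
print("cosine(king - man + woman, queen):", round(cosine(analogy, queen), 3))

# Page content works the same way: embed the body text of each URL, then compare pages.
pages = embed([
    "Corporate gift ideas for remote teams...",        # hypothetical page text
    "Custom water bottles printed with your logo...",  # hypothetical page text
])
print("page-to-page similarity:", round(cosine(pages[0], pages[1]), 3))
```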
William Harris 25:40
I'm wondering, I'm wondering, like, what the overlap is of people that are going to watch this and know exactly what Screaming Frog is, versus those that are, like, what's Screaming Frog? Yeah,
Britney Muller 25:49
it's amazing. It's literally amazing. If I didn't think it would crash my screen share, I would pull it up. It's so amazing for stuff like this. But what's cool is you can do a workflow like this, get internal link opportunities. But I mean, at just even this step where you're exporting the table of vector embeddings of all of these URLs, you can use something like this embedding analyzer Colab notebook that I built for my class. Let me zoom out a little bit. So first and foremost, the code looks super intimidating. This scares everyone at first, and I am constantly pushing people to just run it, right? It's a Colab notebook. You can't break anything. Have fun with this. My students will get my API keys, actually, and they will plug it in and start to play around with some of this stuff. And so here's an example of a website we analyzed in the last course, where it starts to cluster web pages based on their content embeddings. And so it analyzes all of the Screaming Frog site embeddings, and it starts producing some really amazing information about this. So let me get to the cool stuff. So you can start to create content clusters with labels. So this particular site, swagdrop.com, they have a ton of URLs in the corporate gift space, and then we see swag products over here, and promotional products in the green. What you can also do, and visualizing this is step one, we take this to a whole other level when we connect Google Search Console, because then you can take a graph like this, and the size of the nodes can now represent traffic. It can represent ranking. So you start to get such a richer picture of where your site is getting value. What might be the gaps? You know, you have all these pages over here that are getting lots of love that you're neglecting, or maybe you could expand on. It's super interesting. And then I love to pull in Claude and have it suggest other content opportunities based on all of these vector embeddings. So now we start to kind of play around with this and give some more actionable results on what you can do with this data. This is me probably just playing around with different ways to do this. It's really easy, once you have a notebook that does something like this, to save this, come back to it, and just quickly run it in a meeting or for a client and export it to a CSV for them to take with them, right? It's really about empowering you, to empower the client, or, you know, your team, to make smarter and more action-driven decisions.
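A stripped-down sketch of the clustering idea above, not Britney's actual notebook: cluster URLs by their embeddings, then join Search Console clicks so each cluster can be sized by traffic. File names, column names, and the cluster count are placeholders:

```python
# Sketch: group URLs by content embedding, then see which clusters actually earn clicks.
# Assumes pandas, numpy, and scikit-learn; all file and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical Screaming Frog export: one row per URL, embedding stored as a comma-separated string.
crawl = pd.read_csv("screaming_frog_embeddings.csv")  # columns: url, embedding
X = np.vstack([
    np.array([float(v) for v in s.split(",")]) for s in crawl["embedding"]
])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
crawl["cluster"] = kmeans.fit_predict(X)

# Hypothetical Google Search Console export with clicks per URL.
gsc = pd.read_csv("gsc_pages.csv")  # columns: url, clicks
merged = crawl.merge(gsc, on="url", how="left").fillna({"clicks": 0})

# Which content clusters get traffic, and which are being neglected?
summary = merged.groupby("cluster")["clicks"].agg(pages="count", total_clicks="sum")
print(summary.sort_values("total_clicks", ascending=False))
```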
William Harris 28:50
Yeah, well, I think that's a big thing. You're in a corporate boardroom, and you say something and it sounds too technical, and now you can't get the buy-in that you need. Whereas if you could just take this and make it as visual as you have it, it's very easy for them to say, oh, well, yeah, that makes sense, you just need to do this. And you're like, well, I knew that was the answer that we needed, but I needed to get you on board with this so I could get the budget and resources I needed to do it. This makes that a lot easier. How good do you feel like the recommendations were that you got? Like, when you said you plug this into Claude, then too, you're like, great, you know, based on this, here's some recommendations. Did you feel like those recommendations are like, these are good, I would actually maybe run some of
Britney Muller 29:30
these. So some are better than others. And it's funny, like, some sites that we throw into this notebook will yield better results than others, just maybe based on, like, the quality of text or the volume of web pages. This is gonna look different for every site, but I think it's quite good. And the crazy thing is, you can really take this information and refine it to a degree that is super tailored for your client. So maybe you don't care about, like, the suggested content opportunities, but you want to identify something real niche in here. You can literally go over to Claude and request that based on this notebook, or, you know, give it some information about what you're trying to do, and just a glimpse of, like, example embeddings that you've grabbed. It can do phenomenal things. And, like, people that don't program or have any experience in this, in the class that I teach, I have them all go to Claude, and I'm like, we're gonna build something within the first 10 seconds of this class. Like, you can literally come to Claude and say, let's do, um, build me a Pomodoro timer that's Minnesota themed, and we'll see how it does. Like, the power of LLMs, it's insane. Like, especially this particular Claude Sonnet model, it is amazing. Like, I used to buy programmers lunch or beer all the time because I so desperately wanted to know answers to different programming problems I was running into that Stack Overflow wasn't helping me with, and now I have, like, unlimited question opportunities with Claude Sonnet. I feel like it literally has changed my life. It has powered so much of my work. Focus, like the North Star. Oh my gosh, it works. Look at us, land of 10,000 timers. This is a working Pomodoro timer, right? From a very simple prompt. You could literally use it just in your window here, like this, or you could deploy this code somewhere, right, and have it on a website, or whatever you want. So, like, this is the power of LLMs. It's unbelievable.
William Harris 32:03
If you happen to know a programmer out there, buy them some lunch, because they're likely anemic now that Britney has stopped feeding them, so they need food. Okay, so take me back to the other screen there. Could you use this, and are you using this, then, for content gap analysis? Because this is all within one website, right? But it's like, I could imagine this same visualization being very appropriate if you're looking at, like, okay, show me my website versus these three competitors.
Britney Muller 32:31
Yeah. So there's so many different ways to do content gap analysis. I have students that really love this approach, because it's quite simple and low-lift and already set up to do some of that. But the ability to analyze the actual search result pages themselves, crawl the ranking pages, crawl the competing pages, and do content gap analysis, in my opinion, yields the best results, right? Because that helps us be more competitive in search, but also just with what, you know, people are finding interesting, most likely. So I think that's kind of the next level of this, but you could also use that to enrich this view. Yeah, and I think there's just so much power in getting comfortable in Google Colab notebooks and analyzing data sets in a way that's data-science focused. Pandas is really the language that got me into all of this. It's pretty straightforward, and your ability to just, you know, create these notebooks, I have so many that are for Google Search Console exports that immediately create, like, custom click-through rate graphs based on your branded and non-branded keywords. How has that changed over time? What are we looking at, right? Why are we getting higher click-through rates when we rank number two for branded keywords than number one? What might that tell us about some of our strategy? Or how can we use that to forecast traffic moving forward? Forecasting is another whole area I would love to get into, but we probably don't have time. There's so many great ways to deploy time series forecasting models to predict traffic that observe national holidays, seasonal trends, based off three years' worth of data. You know, it knows weekends and this and that. I mean, it's so powerful, and it helps goal setting. It helps kind of set clients up for expectations and success. There's so many things you can do with this tech.
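A small sketch of the Search Console notebook idea, branded versus non-branded click-through rate by ranking position; the file name, columns, and brand term are placeholders to adapt to your own export:

```python
# Sketch: compare CTR for branded vs. non-branded queries at each average position.
# Assumes pandas; the GSC export format and brand term are hypothetical.
import pandas as pd

queries = pd.read_csv("gsc_queries.csv")  # columns: query, clicks, impressions, position
queries["branded"] = queries["query"].str.contains("yourbrand", case=False)  # swap in your brand term
queries["position_bucket"] = queries["position"].round().clip(upper=10)

ctr = (
    queries.groupby(["branded", "position_bucket"])
    .agg(clicks=("clicks", "sum"), impressions=("impressions", "sum"))
    .assign(ctr=lambda d: d["clicks"] / d["impressions"])
)
# e.g., compare branded CTR at position 1 vs. position 2, as discussed above.
print(ctr)
```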
William Harris 34:35
See, that's so good, because when we run geo holdout tests, one of the things that I like to look at from a forecasting perspective, when running the holdout test, is when you're picking the right, let's just say, zip codes or whatever to analyze, there are things that sometimes can't be immediately evident in the data if you're just going off of basic rules without that. And one example that I would give is, let's say sunglasses. Sunglasses might do very well during certain parts of the year, or all the time, in California, but in Minnesota, where you and I are at, like, there's definitely some peaks. It's gonna be, like, when summer starts coming out and the sun gets bright, and when winter gets out and the snow is out there, and you walk outside and it's absolutely blinding. And, you know, people wouldn't think that about Minnesota. And so you might realize that, oh, this zip code is actually expected to have an increase in sunglass sales of, you know, 100% during this period of time. Therefore, I need to exclude that from my holdout test, because it's going to completely change this significantly, right? So, yeah, things like that, like the forecasting, using the data, I think is a really good way to use this.
Britney Muller 35:45
It's so cool. And the most popular time series forecasting model, in my opinion, has for the last several years always been Prophet. It's an open-source, yeah, model by Facebook, yes, by Meta. And what's funny, too, is, like, to your point, what you just said about the sunglasses thing, I think so much of those insights lie not only in the forecast itself, but, like, in the actual graphs of those seasonal trends. Like, it has helped so many of my clients identify kind of, like, strange times of the year that they tend to do well, what holidays they do well on versus others. And it's drastically different. And it's different, you know, country to country as well. It's very, very interesting insights. I love that.
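For the forecasting piece, a minimal sketch with Prophet along the lines described above; the column names, holiday calendar, and horizon are assumptions to adjust per client:

```python
# Sketch: fit a Prophet model on a few years of daily sessions and forecast the next quarter.
# Assumes the `prophet` and `pandas` packages; file and column names are hypothetical.
import pandas as pd
from prophet import Prophet

daily = pd.read_csv("daily_sessions.csv")  # columns: date, sessions
df = daily.rename(columns={"date": "ds", "sessions": "y"})

model = Prophet(yearly_seasonality=True, weekly_seasonality=True)
model.add_country_holidays(country_name="US")  # swap for the client's market
model.fit(df)

future = model.make_future_dataframe(periods=90)  # forecast 90 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())

# The seasonal components are often where the insight is (weekly, yearly, holiday effects):
# model.plot_components(forecast)
```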
William Harris 36:35
all right. So this was a fun one. I want to get into, you were using the Reddit API for some really interesting insights too. And we hinted at that one before, but let's look at this one. Oh, my God,
Britney Muller 36:46
I love the Reddit API. Okay, so I put together this free webinar. It's available on Maven. The recording's up, free, available for anyone, but I helped people basically build out a Reddit intelligence tool within an hour. And so I walk you through all the steps of, like, how to find your Reddit API keys, because they do kind of make it hard to find. I honestly feel like that's kind of the trickiest part. And then once you get your key, I kind of show you, like, your client ID, where that is in Reddit and how you put it here, your client secret, you put that there. And then, you know, I try to set everyone up for success with these notebooks so that you literally just have to come in and do the bare minimum, right, of replacing those keys. And then, right now, what it's doing is, it's just testing: are we able to receive the data from Reddit, right? So I found this Reddit post asking, like, best water bottle out right now. Are we able to run this and get the comments? And we are, right? We see the username, we see the comment, we see the upvotes, and so now we're sort of cooking. What can we do with this information? And so it's quite long, I should have clipped it, but now we can start to surface insights, right? And again, like, this is such a cool starting point, because you could literally come into this notebook of mine, copy all of this, throw it into Claude and say, hey, I don't want those insights, I want these insights, right? And start to work with LLMs to get real specific. So let's see, what am I doing here? Yeah, I search Reddit for a specific topic, and then I print the analysis: basic numbers, sentiment, most active subreddits, most active users. So here, for water bottle, we see that we have grabbed 50 posts. You can expand on this. I was doing a demo, so I tried to keep it low, but you can crank that up, and you immediately get: these are the most active subreddits talking about water bottles, and these are the most active users. And so from here, the world is your oyster. I kind of get a little bit crazier, and I hope this is still saved, but I pull in, like, sentiment analysis, busiest times. So for these other subreddits, right, what's the most active time to post on? What are the most popular topics? Oh, and I think I changed this query from water bottle to artificial intelligence, and we see psychedelics. People that talk about artificial intelligence are talking about psychedelics and consciousness. Good Lord, right? You know, like, you just start to get a glimpse into, again, real-time conversation. Let's see if this populated, but yeah, I mean, it just goes on and on. You can start to surface all of the top-performing threads, the post type, so I have it analyze, you know, what kind of posts appear to be performing best for this particular product or topic. You know, how can my clients capitalize on, you know, getting in there during the best posting times, or when these subreddits are the most active? What's the most active day of the week, right? What are some of the top themes? You can surface all of this just within a Colab notebook using the Reddit API.
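To make the Reddit workflow concrete, here's a hedged sketch with the PRAW library, not the exact notebook from the webinar; the credentials, query, and limits are placeholders:

```python
# Sketch: authenticate with your own Reddit API keys, search a topic, and surface basic insights.
# Assumes the `praw` package; client_id/client_secret come from your Reddit app settings, as described above.
from collections import Counter
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="audience-research-demo by u/yourname",
)

posts = list(reddit.subreddit("all").search("water bottle", limit=50))

# Most active subreddits for the topic.
subreddit_counts = Counter(post.subreddit.display_name for post in posts)
print("Most active subreddits:", subreddit_counts.most_common(5))

# Top-performing threads by upvotes.
top = sorted(posts, key=lambda p: p.score, reverse=True)[:5]
for post in top:
    print(post.score, "upvotes |", post.subreddit.display_name, "|", post.title[:80])

# Pull comments (username, upvotes, text) from the best thread for downstream sentiment analysis.
best = top[0]
best.comments.replace_more(limit=0)
for comment in best.comments.list()[:10]:
    print(comment.score, str(comment.author), comment.body[:100])
```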
William Harris 40:28
Yeah, that's amazing. And I think a lot of people are starting to rediscover Reddit. I don't know if you've noticed this, but there's been more talk in the marketing space about going back to Reddit this year than I've seen in a while. It has its waves where people, marketers, I should say, marketers are like, we love Reddit, we hate Reddit, we love Reddit. And right now, I think, from what I understand, a lot of the LLMs are pulling a lot of information from Reddit anyways. And so if they're saying, great, we're going to source answers, they're sourcing a lot of information from Reddit for whatever reason. And so I think that if you can rank in Reddit, then there is the potential that you're ranking a little bit better than just being an answer within, you know, ChatGPT or whatever. So I've seen a lot of people that are trying to hack Reddit again with the hope that it helps them hack LLMs.
Britney Muller 41:16
Yeah, it's really interesting, because Reddit and other popular forum sites supplement what AI can never do. They provide real-world experience. They provide real opinions and, you know, product experiences, stories. AI has no friends, right? It has no ground truth. It doesn't live in this world. It's like an alien in a different universe that doesn't even have gravity. Like, it has no idea what the actual real world is like. It's literally just kind of consumed all of the text and is parroting that information back to us. So because of this, this is why we've seen all the AI companies gravitate and make deals with sites like Reddit, because they know that people seek that, right? Like, we naturally want other people's thoughts and opinions on, like, what's the buy-it-for-life water bottle, right? Has anyone had this weird experience with this product or service? AI can't touch that. And so we're seeing Reddit and other forum sites perform better and better, and AI is, you know, looping that into their systems in a way that allows them to, again, kind of supplement what they completely lack.
William Harris 42:44
Yeah, you talk about how AI doesn't have any friends, and it reminded me of something that Christian Limon, who was on here, he was over growth marketing at Wish, and, like, he led them, did some crazy things, Gemini, if you're into the crypto space as well. And he was talking about something that he ran through, I think it was ChatGPT, one of them, and it gave him the wrong answer. And he was like, you know, that's the wrong answer. It's like, oh, I'm sorry, let me help you. He was like, no, I want you to internalize it, like, you have wasted my time. To your point, it doesn't have friends, like you said. It's like, we become its eyes and ears. We become its ability to understand the real world through these contexts, and this is one of its best ways to source a lot of very interesting real-world thoughts, feelings, emotions. What does it taste like? What does it smell like? What does it look like? How does it feel in my hands? Is it heavy? Is it light? All of these different things is starting to add that very human context to this.
Britney Muller 43:38
Yes, exactly, exactly. It's important to reinforce that it gets anthropomorphized so often that, you know, people really start to, like, wonder about these things, but it's literally ones and zeros. This technology has no emotion, right? Like, oh, that conversation, and, like, getting into some of the ethics stuff, really drives me nuts, that people are being led to believe these things. Because it's, like, a controlling, manipulative tactic to, you know, attract investors and customers: oh, we have this god-like technology, so, you know, you need to regulate us, but not like that. And it's bullshit. Like, it's just bullshit,
William Harris 44:26
yeah, yeah, yeah, Britney, there's another one that I would love you to share, but I know it's on a different computer, so you can't share it, but I at least want you to just tell people about a really fun thing that you're doing with Harry Potter. Yeah.
Britney Muller 44:38
So one of the things that I talk about a lot in the course with my students is, I believe, I have really strong conviction, that this technology will be more and more integrated into the platforms we know and love today, right? Instead of going to Claude, instead of going to ChatGPT, like, it really should show up within Google Sheets, it should show up, and there are tons of ways to do that, and then there's all the privacy concerns and this and that. And one of the ways that I've discovered to do that and deploy it quickly and safely is through custom Chrome extensions. It's actually really, really easy to build out a custom Chrome extension. I honestly, I use Claude to do this. So I went to Claude one day and I asked about, let's build out this custom Chrome extension that replies to hotel reviews in the voice of, you know, Harry Potter, has a Harry Potter style of answering and responding to reviews. And so I built it out. And it even, like, looks magical. It has the wand and, like, the different magic emojis. And so when I go to my extension settings, and again, it's local, so it's unfortunately not on this computer but a different computer, and I upload it, I can click on that and show students in Google Maps, you know, what it would look like to quickly draft up something like that, to respond to reviews and customers. And it's just so fun, and so I'm seeing lots of people take that into Gmail, right? Lots of non-native English speakers can use that to support them in, you know, their emails and responses and to check different things. It's helpful for all sorts of capabilities and applications, and it's so fast. And the part that I like is it's not fancy, right? It's so scrappy and lightweight. And I've been so impressed with Chrome's capabilities to do updates on those extensions. It's actually really, really simple to do some of this stuff and, again, not get too intimidated with the programming side of things.
William Harris 47:06
I think that's huge. Yeah, the idea of being able to make the updates and not get too intimidated on that programming side of things. These are some really fun examples that we've gone through. There's a lot more that people can be doing in the marketing space, in the e-commerce space. What are some ways to spot AI opportunities? I understand there's a case study of a student who knew AI could help his job, but had no idea how. Like, what did he end up doing? Like, how do you find these opportunities? Yeah,
Britney Muller 47:37
I think the first part is really identifying the tasks that you do regularly or that you hate to do. And then I sort of take students through this workflow of, you know, really starting to outline what those tasks are, what those tasks look like, and breaking it down step by step, as if you had to hand that paper over to someone in another room to complete, right? Getting really specific will help us identify what parts of the process, like this Reddit insights tool, do we need to pull in our own data for, right, and use an API to pull in for that step, and then these following steps, we can use AI to surface some insights like this, right? What step of the process are we pulling AI in? How can we make sure that, you know, we're being safe in those applications? So identifying risk, right? You wouldn't want to apply some of this in high-risk applications like healthcare or medical diagnostics, things like that, or even for marketers, like, I think of reputation risks. You know, there should be a human in the loop of these processes that oversees quality control, that adds personalization and, you know, world-class writing and personality to whatever it is that you're trying to automate. But I think another thing that is a bit undervalued, in my opinion, is seeing examples in the wild, right? That's where I get lots of my inspiration: I will see people using AI in a weird way for this industry, but I will think, oh my gosh, what if I tweaked that to apply it to this client over here, right? What if I modified this and had, like, this parallel application that does it differently? And so I think it's important to see lots and lots of examples. And so, another screen share here. I do take students through this sheet where I have them pull in GPT for Sheets. It's a Google Sheets extension. It's amazing. And so I have them all fire that up, and then within each of these tabs we'll go into breakout rooms. So one group will do sentiment analysis, where you see all of these IMDb movie reviews for Road House, and they have to work in a team to write a prompt like this: what's the sentiment of the following movie review? And then I'll just put A2. And then you can, you know, let it load and play around with the prompt. And so, oh, that's the reviewer, I'm in the wrong box here, C2. But, like, this is the, you know, problem solving that they do as a team, and they start to play around with, is it providing, like, the output that they want? And oftentimes it starts like this, right? It's not positive, neutral, or negative. It's this long-winded whatever. And so you've got to come in here and say, only output positive, neutral, or negative. And so you're tackling this problem as a team. And the path through AI applications is failure. It is 100% failing. It is not getting the prompt right on the first time. No one gets the prompt right on the first time. That is part of the process and something to get comfortable, really comfortable, with, because that truly, like, lights the path. Another tab is for summarization, these long technical articles about AI. I have a, you know, a group of students summarize those. I have a group categorize these product descriptions. So what are the product categories, right? LLMs have seen all of the e-commerce sites in the world. They're really good at identifying product category. Data enrichment, right? They know a lot about the Fortune 500 companies; they can add a column of what industries they fall within. Language translation.
It's incredible at translating language into other languages. Customer service support. You know, Excel formulas. I never want to write an advanced Excel formula again for as long as I live, because I can just take this stuff, put it into Claude and say, you know, I want to extract the website from these email addresses, right? What's the formula for this? And I have an LLM do that for me.
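Outside of Sheets, the same sentiment exercise looks roughly like this in Python; the model name and reviews are made up, but the prompt-tightening step (forcing a one-word label) is the point of the classroom exercise described above:

```python
# Sketch: classify review sentiment with a prompt constrained to a one-word answer.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def classify_sentiment(review: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "What is the sentiment of the following movie review? "
                "Only output one word: positive, neutral, or negative.\n\n" + review
            ),
        }],
        temperature=0,  # keep the labels consistent run to run
    )
    return response.choices[0].message.content.strip().lower()

reviews = [
    "A ridiculous blast. I grinned the whole way through.",             # hypothetical review text
    "Flat characters and a plot I forgot before the credits rolled.",   # hypothetical review text
]
for review in reviews:
    print(classify_sentiment(review), "|", review)
```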
William Harris 52:37
Which is good, but I'm pretty proud of some of the Excel formulas I wrote. Yeah, but it is good. It is better. But, oh, I kind of liked that part, yeah,
Britney Muller 52:45
I know. I mean, if you're good at it, it is so slick. Like, I love whipping up, like, ones I'm comfortable with. But when it's more like a regex, I just can't, like, I can't do it, yeah? But then there's, like, these kind of mean workflows I put poor students through that are really setting them up for failure, where I have them put in, like, really random elementary schools that they attended, and I try to have the LLM predict what the mascot was for this school, and you see, most of the time, it doesn't do that well, or it's weird or old, outdated information. And so again, this research part of it, it's important to play around with it and to really experiment on a topic that you're well-versed in. Test it out, right? See how well it does on some of those things,
William Harris 53:41
that's a really good call out. Yeah, I want to talk about some of the obstacles then too, because we've gotten into a lot of the practicals. But what are some of the obstacles? Why can AI adoption fail sometimes?
Britney Muller 53:55
oh my gosh, so many reasons. It's so funny. I literally just read the paper "Fully Autonomous AI Agents Should Not Be Developed." I love that
William Harris 54:05
when you say you read the paper, you actually read the paper, the physical printed out paper, which, by the way, I print out my show notes. I'm not above it.
Britney Muller 54:14
Are you refilling your ink cartridges yet? Because that's so rewarding. I don't know why
William Harris 54:20
I subscribe to it, so it just automatically sends it to me whenever it detects it's low. It's like, oh, you need new stuff. Oh, nice. I interrupted you. Okay, so you read this paper? Yeah,
Britney Muller 54:32
there's a lot of things that we're getting wrong. First and foremost, these systems, because of the way that they're being improved through what's called RLHF, reinforcement learning from human feedback, it's reinforcing our human biases onto ourselves, and they really do. These systems magnify all of society's biases and issues, and guardrails are, like, barely holding it back from just spewing toxic hate speech, to be honest, sure. And so what's scary, and especially in, like, a marketing sense, is some of these things slip through the cracks, right? And I start to really worry about, like, reputation, different issues around, like, harm and using people's, like, personally identifiable information, images, right? Like, the image generators notoriously hyper-sexualize women. Like, it's unbelievable. And I see different examples that, on the surface, appear innocuous, like the PGA Instagram account. They did this post a couple years ago, maybe two years ago, on player headshots that they expanded with Adobe Firefly, an AI image generator that can do some really powerful things. And at first, you know, you're looking at some of these top players, like Rory, and, you know, it's just kind of funny and interesting. But then you get to, you know, players like Collin Morikawa, and he's, you know, he's of Asian descent, and he's working on a wooden item. It looks like he's in some sort of sweatshop. And then you get to, who was the other one? There's another player that looked like he was in, like, a dump. Literally, the ground wasn't even, like, a studio setting, it was in a bunch of dirt. And I can imagine the marketers and, you know, the social media team sitting in a room and just thinking it's funny. And all it takes, it doesn't take, like, technical background on this stuff, but it takes one person in that room to be like, oh, this might be kind of fucked up in ways we're not realizing just yet, right? Like, why is it doing this to all the players of color? But the white players get, like, the regular, you know, maybe funny poses, but they get the full background that fits their headshot, and the players of color are getting these strange elements that come into focus. And what's scary is we're putting this stuff out into the world at such a fast pace that, how is this going to affect, like, children's worldviews, you know? Like, I start to worry about how this starts to leak out in different nefarious ways that maybe we don't even realize right now. And I think, again, as marketers, we have to be really mindful about how we're using this. I feel real strange seeing so many AI-generated images on marketing websites. I think there's going to be a rise in authentic images, in authentic writing, unique voices. You know, if you look at a giant bell curve of all of this AI slop content, world-class content is way out here, and it's going to get, you know, eyes and attention, and people are going to remember it. So, using this stuff in the right ways, I absolutely see it as a powerful tool, obviously, to do lots of kind of the more boring tasks, right, to make your work more efficient, so that you can be more creative, you can be more thoughtful in your marketing and come up with different ideas and take risks and play in a way that maybe you haven't been able to do for so long because you're, you know, buried in reports and emails and this and that. And so that's where I really see
William Harris 58:43
it taking off. Yeah, you brought up some interesting points, though, about how it's fed information that has largely been generated by humans, who all have different types of biases or thoughts, and then we continue to reaffirm those by giving thumbs up or thumbs down on different things. And so, to your point, it is an engine that could get away from us very quickly if we're not careful about how we're training it, the guardrails we set up, and what that looks like. And I know that Noah Kravitz was on here; he is the host of the NVIDIA AI Podcast, and that's a big thing that they've been talking about over at NVIDIA too. From an ethical standpoint, where do we really start bringing in oversight versus not oversight? There are downsides to oversight as well, right? For instance, if you train the algorithm to say you are only allowed to give this answer in this context, well, who determined that, right? And how do we know that that is actually an okay answer to give in this situation? So it's a tricky ethical problem that we're going to have to figure out how to solve in the next couple of years; probably very quickly, we have to start figuring out what this looks like. Yeah. Yeah,
Britney Muller 1:00:00
and I don't think there's a perfect solution either. You know, unfortunately, it's a messy, hairy problem. But the first step that a lot of the professional AI ethicists have identified is transparency. We have little to no transparency today into what has gone into the biggest and best models, and they hold that tightly behind closed doors because, right, it's a revenue-generating machine. But imagine if we had something like what Margaret Mitchell and Timnit Gebru have worked on: model cards, which are essentially nutritional labels for AI models. Here's the languages it's been trained on, here's the websites and the data sets it's been trained on. I was lucky enough to be part of the largest open-source multilingual model in the world, called BLOOM, and on the BLOOM page on Hugging Face, you can see the model card that explicitly communicates what it's been trained on and what it's good at, right? That gives researchers an idea of, oh, it's probably good at these languages, right? It's not been trained on these nastier sites, and so maybe it's more kind or thoughtful. And all the LLMs have different personalities. A great example is GPT-2, which was so whimsical and romantic because it was trained so much on Shakespearean text. It was always real wordsy, and it was so funny, so romantic. But now, you know, we've trained on more of the internet, and we've lost that part of it. You'll also notice Google's LLMs are a lot more rigid, totally, because they've cranked down that randomness feature; they don't want it spewing lots of random, incorrect, inconsistent outputs.
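For readers who want to see what a model card looks like in practice, here is a minimal sketch (not from the conversation) that fetches the card for the BLOOM model Britney mentions, using the public huggingface_hub Python library; the specific fields printed are just illustrative.

```python
# A minimal sketch (not from the episode): reading the model card for the
# BLOOM model Britney mentions, via the public huggingface_hub library.
from huggingface_hub import ModelCard

card = ModelCard.load("bigscience/bloom")  # real, public model repository

# Structured metadata from the card's YAML header (languages, license, datasets, ...)
print(card.data.to_dict().get("language"))

# The human-readable body: training data, intended uses, known limitations.
print(card.text[:500])
```

In spirit, this is the "nutrition label" idea: before building on a model, you can check what it was trained on and where its documented weak spots are.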
William Harris 1:02:02
It is more logical. I find all of my... Yes, yep, it's less
Britney Muller 1:02:06
like interesting and fiery, and the personality. An easy way to think of that temperature randomness setting is: the lower the temperature, the colder the outputs are, you know, quite stiff and redundant. And the higher the temperature, the hotter, spicier, crazier personality the outputs get. And companies range wildly in how they have that temperature setting selected.
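To make the hot-versus-cold intuition concrete, here is a small illustrative sketch of temperature-scaled sampling. The tiny vocabulary and token scores are invented for the example; real models apply the same idea over tens of thousands of tokens.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng=np.random.default_rng(0)):
    """Pick one token index from raw model scores (logits).

    Low temperature sharpens the distribution (stiff, repetitive picks);
    high temperature flattens it (spicier, more surprising picks).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token scores for a toy vocabulary
vocab = ["the", "sea", "moon", "spreadsheet"]
logits = [2.0, 1.0, 0.5, -1.0]

print(vocab[sample_with_temperature(logits, temperature=0.2)])  # almost always "the"
print(vocab[sample_with_temperature(logits, temperature=1.5)])  # far more variety
```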
William Harris 1:02:33
I like that nutrition facts card, sort of, that you're talking about. That's a very interesting way to think about it, because it allows us, as consumers of food right now, to say, I'm okay with that, I at least have some awareness of what I'm putting into my body. And yeah, I want those ingredients, or I don't want that blue number 40 or whatever that thing is, right? And I don't even know if blue number 40 is a real one; maybe it's red number 40, I don't know. But there are these things, right? So it's like, I don't want that in my diet. To your point, if there's some element of this that we can look at from the perspective of how it's been trained and what's been ingested by it, then we have an idea of what types of answers we're likely to get from it. I think that's a very interesting idea. I'm not an ethicist; I don't know how good that is or isn't, but I appreciate that idea a lot. Yeah. I want to dig into who is Britney Muller. So, speaking of the human in the loop, I want to talk about the human who has discussed the content here today. And I want to talk first about some of your ways: you showed a Pomodoro calculator, or Pomodoro timer, but that is not how you keep track of time. How do you keep track of time?
Britney Muller 1:03:44
Oh my gosh, this is a more recent development. Well, I've been doing this for maybe six months now. I need some sort of ritual to get me into deep work and to keep me mindful that I'm in the deep work zone. And one way that helps me do that, and this is so weird, is these taper candles. I don't know, there's something about lighting a physical candle at my desk; there's something soothing about it, and I use it to keep track of how much deep work I've done. And I know that if I put in three really hard days of deep work, trying to maximize it, there should be no candle left, right? It's just a way for me to start to play around with that a little bit and keep my focus. I have some really incredible mentors, one who is always kind of on my case about focusing and, you know, communicating the value of how focus works, like compounding interest. You know, some of the best startup entrepreneurs in the world, and business-savvy people, understand the power of focus and saying no to most things, and so I really try to do that and use any tools I can to
William Harris 1:05:12
keep that, even candles. So what's interesting is this is not a novel concept, candles being used for time. Are you aware that, prior to electricity, people would set alarm clocks using candles? No? Okay. So apparently, what they would do is put a nail into the candle at about the amount of time they expected, so that by the time the candle burns down to that point, the nail falls, hits a metal plate, and you're like, oh, okay, it's time to wake up, or whatever this needs
Britney Muller 1:05:38
to be. My gosh, William, what?
William Harris 1:05:42
Yeah. I mean, I didn't do this, but I've heard this, so, you know, I can't verify that information. Okay, speaking of other things that we like to learn about, there's a quote that you have mentioned before, that you've told me you kind of live by. Yeah, go ahead,
Britney Muller 1:06:07
Yeah, it is: nothing will stop you being creative so effectively as the fear of making a mistake.
William Harris 1:06:16
I love it. It's so true. Why? Like, I have my thoughts about it, but I want to hear your thoughts: why this quote? Why do you live by this quote? It
Britney Muller 1:06:25
was so funny. I remember vividly going to lunch with this really smart, older businesswoman in Colorado. I was explaining my work and what I do, and I get real caught up in quality and research and all this stuff, and she goes, what is the enemy of good? And I immediately thought, like, bad, right? And she goes, it's perfect. Perfect will kill you. I've been trying, and this is so much easier said than done, but imperfect action is something I really struggle with, because I will reflect back on it and feel shame that it wasn't quite to the standards I know I could have met. You know, that talk could have been better, this research could have been more comprehensive, this whatever. I'm constantly harping on myself about that. But then you see everyone else putting kind of junk into the world, and different things take off, right? And so taking imperfect action is so powerful, and gaining some of that momentum. I just need constant reminders, otherwise I get bogged down in
William Harris 1:07:40
Are you familiar with the book Outliers, by Malcolm Gladwell? Yeah, I love it. Okay, there's a part in there where he talks about, and I might get it a little bit wrong, but essentially, I think it was pottery, where one group was given the task of making the absolute best piece of pottery, like a vase or something like that, right? Just one: you spend all your time practicing, whatever, but you're going to make one vase at the end of this 30-day period. I'm making up numbers, I don't remember it exactly, but I understand the concept, yeah. And then the other group was given the task of making as many vases as they possibly could for the next 30 days, yeah. And what's interesting is that the group that didn't have the constraint of making the best, but simply made as many as they could, ended up with significantly better outputs than the group that focused all their attention on just trying to make the best one, and they had 30 days to do it. And I think, to your point, that is where perfection really does become the enemy of doing something well, because it gets in the way of just getting all of those reps in to reach the point where you can have a good output.
Britney Muller 1:08:48
Exactly. It stifles your feedback loop. You know, you don't know what works and what doesn't, because you've held back on putting so many things out there. Yeah,
William Harris 1:08:59
yeah, there's two stories that you told me, and I'll let you choose which one you want to tell, because they're both really interesting and fun, and both of them have nothing to do necessarily with anything other than we just enjoy having a good time. One was ghost migrations. The other one was grandfather clocks. I'll let you choose, because we're running out of time. Which one do you like to tell more?
Britney Muller 1:09:22
Um, they're both so good and sort of similar in different ways. I think so... entrainment, which is this concept that pendulums swinging near each other sync up after a period of time. There are really interesting videos on YouTube you can watch where different pendulums or grandfather clocks are swinging every which way, and then after a period of time, they're all in synchronicity. And research has been done about that with humans, right? So when we're with people, when we're communicating with others, we tend to sync up our breathing, our heartbeats. Women's menstrual cycles sync up. It's so fascinating to me, and it's such a cool, concrete, scientific kind of stake in the ground to reference: you really are who you surround yourself with. And I think it's important to be really mindful of that. Who do you feel energized around? Who makes you feel good? Tap into that throughout your life and your career, because life's so short; work with and be around people that you enjoy, that make you feel good. Otherwise, you're gonna get all out of whack and your entrainment's gonna go the opposite direction.
William Harris 1:10:43
I love that. And what's interesting about that is that you might start off in the complete opposite direction, right? Like, you might start off completely opposite, and then eventually they merge together and you're going the same way. And when I think about that with people, because you just brought that back to people, let's go back to opposites that aren't really opposites. You just talked about how the opposite of good isn't really bad, or the antithesis, whatever; it's perfect. Perfect is going to block you from getting there. Another opposite that I think people miss a lot is that the opposite of love isn't hate, it's apathy, right? Because if you hate someone or something, for the most part, it's because you care so much about what they think or say at that moment that it can command so much of you. Apathy is truly the opposite of love, right? There is no love left whatsoever. So, to your point, sometimes those people you surround yourself with might irritate you in the best possible way, and that irritation is the idea of iron sharpening iron, right? And so it is that pendulum that, maybe, the reality is, is going to help get you to the point where you are in synchronicity with each other. Absolutely,
Britney Muller 1:11:51
absolutely, I think there's so much value in surrounding yourself with different kinds of people and people that hold different opinions and think differently than you. And there's so much power in that. Yeah, I think
William Harris 1:12:02
that's one of the things that we've been missing from the social media bubble. I think that's one of the things you almost hinted at here a little bit with the idea of the LLMs too: we might be creating another thing that has this significant hive mind as well, that we have to be careful of, where we all just start saying, well, you know, that's what ChatGPT says, and all the LLMs are saying the same thing now, so that's what I believe as well, versus having the ability to have thoughts outside of that. Literally,
Britney Muller 1:12:28
that's perfectly, perfectly said. There was a really good quote on this in this paper. I know we're at time, but I'm like, oh my god, what you just said, that is literally the perfect summarization, like a bow on top: the LLMs, the different issues, the feedback loop, these echo chambers. It's something that we deal with on a regular basis, and it continues to cause some of these issues. Yeah,
William Harris 1:13:00
Britney, it's been so much fun getting to know you, listening to you, learning from you. If people want to work with you, take your class like, what is the best way for them to get in touch? Stay in touch? Yeah,
Britney Muller 1:13:11
that's a great question. So they can check out my course on Maven; it's called Actionable AI for Marketers. I'm trying to run it at least once a quarter, if not more, later towards the end of this year, so you can keep an eye out for that. Other areas: I'm on LinkedIn, and I'm on Data Sci 101, where I continue to provide different resources around AI and actionable AI for marketers and other professionals. Those are probably the best ways to find me.
William Harris 1:13:49
Awesome. Well, again, I really appreciate you sharing your time, sharing your wisdom with us. It's been a lot of fun talking to you.
Britney Muller 1:13:56
This has been so fun. Thank you, William, thank you for
William Harris 1:14:00
joining everybody. I hope you have a great rest of today.
Outro 1:14:04
Thanks for listening to the Up Arrow Podcast with William Harris. We'll see you again next time, and be sure to click Subscribe to get future episodes.