
No One is Talking About the Most Important Lessons from Google’s Gemini Launch Kerfuffle (Thinks Out Loud Episode 415)

Screenshot of Google's Gemini AI assistant

You’ve probably heard about Google’s messy launch of its Gemini AI product. Commentators with political axes to grind across the spectrum have pounded the search giant for its mistakes, leading CEO Sundar Pichai to call the tool’s responses “completely unacceptable.” As entertaining as it might be for some folks, though, to watch Google get pummeled, there’s a larger story that we seem to be missing. The Gemini debacle highlights serious lessons about artificial intelligence that matter to your business, too.

What are those lessons from Google’s Gemini launch kerfuffle that no one is talking about? Why do they matter for you, your brand, your business, and your career? And how do you make sure you can put those lessons to work? That’s what this episode of the Thinks Out Loud podcast is all about.

Want to learn more? Here are the show notes for you.

No One is Talking About the Most Important Lessons from Google’s Gemini Launch Kerfuffle (Thinks Out Loud Episode 415) Headlines and Show Notes

Show Notes and Links

You might also enjoy this webinar I recently participated in with Miles Partnership that looked at "The Power of Generative AI and ChatGPT: What It Means for Tourism & Hospitality" here:

Free Downloads

We have some free downloads for you to help you navigate the current situation, which you can find right here:

Best of Thinks Out Loud

You can find our “Best of Thinks Out Loud” playlist on Spotify right here:

Subscribe to Thinks Out Loud

Contact information for the podcast: podcast@timpeter.com

Past Insights from Tim Peter Thinks

Technical Details for Thinks Out Loud

Recorded using a Shure SM7B Vocal Dynamic Microphone and a Focusrite Scarlett 4i4 (3rd Gen) USB Audio Interface into Logic Pro X for the Mac.

Running time: 26m 39s

You can subscribe to Thinks Out Loud in iTunes, the Google Play Store, via our dedicated podcast RSS feed (or sign up for our free newsletter). You can also download/listen to the podcast here on Thinks using the player at the top of this page.

Transcript: No One is Talking About the Most Important Lessons from Google’s Gemini Launch Kerfuffle

Well hello again everyone and welcome back to Thinks Out Loud, your source for all the digital expertise your business needs. My name is Tim Peter, this is episode 415 of The Big Show, and thank you so much for tuning in. I very much appreciate it.

I’m going to talk about something today that I didn’t think I was going to address, but it’s been such a big topic that I feel it’s important to talk about. And that is Google’s Gemini rollout and the big kerfuffle that has emerged around it because it produced some shockingly bad answers. And I want to be very clear: I am going to stay away from all the culture war arguments that people are making online, because I don’t think that’s the important part of what happened here.

So, as a quick recap, just for those of you who are not aware, Google released Gemini 1.5 Pro about a week ago, and it has gone really badly. Gemini produced a series of images, when prompted, that were historically inaccurate in the extreme. For example, people prompted it to show the Founding Fathers of America.

And in the interest of being diverse, it was very inaccurate. It drew pictures of people of color, or Native Americans, or Asians, or the like, instead of, as you might expect, European white males, right? Which, I mean, not great in terms of historical accuracy, obviously, without getting into any of the culture side of this, right?

It was even more hilariously wrong when people said, hey, draw me a picture of Nazis, and it did the same thing. And of course, this has led people to say that it is biased, that Google has developed something that is biased against white people. And of course, folks on the other side of the spectrum are saying that it’s creating harm by accusing people who weren’t involved in, say, the atrocities of Nazism of being Nazis.

So, you know, basically, lots of folks online were not happy about this. Again, I don’t think that the way that Google’s Gemini screwed up is all that interesting. Sure, it allows people to fight online, and gosh knows, everybody loves that. But that’s not really what I think is the important story here. And I’m going to explain what I think the important story is in just a moment.

What I think we can say is that, yeah, Google screwed up. I’m a big fan of the quote that to err is human, but to really screw up requires a committee. Right? This took some doing for Google to get this wrong. You know, I also don’t think they’re evil in this case, unlike the folks on either side who seem to be arguing that Google’s biases blinded it to the problem.

I don’t think that’s what happened here, and I could be wrong. I want to be fair. I could be very wrong about this. I want to also be very upfront. You’ve heard me bash Google many, many, many times. And on this one, I’m going to defend them, but not in the way that I think you expect. In fact, as I’ll talk about a little bit later, they likely spent so much time thinking about some truly evil behaviors, that they missed the more banal biased material that Gemini produced.

And I think Google’s error is a sign of a bigger problem that almost no one is talking about, and that is that AIs are a black box. Google wanted an AI that overcame traditional biases, and instead they got one that’s absurdly biased in an entirely different, entirely unpredictable direction. Some of you may have heard people talk about AI’s alignment problem, which is that the machine will pursue the goals we give it in ways we never anticipated.

This is most famously postulated by the Swedish philosopher Nick Bostrom in what is known as the paperclip problem. And I’m going to read this at length from the Wikipedia article. Bostrom said:

“Suppose we have an AI whose only goal is to make as many paperclips as possible. The AI will realize quickly that it would be much better if there were no humans, because humans might decide to switch it off. Because if humans do so, there would be fewer paperclips. Also, human bodies contain a lot of atoms that could be made into paperclips. The future that the AI would be trying to gear towards would be one in which there were a lot of paperclips, but no humans.”

Right? Okay, grim and dark, but I mean, Swedish philosopher, what do you expect? You know, this is one of those situations where Google said, don’t be biased, and the AI said, cool, okay, I’ll just be biased in an entirely different way. That’s going to happen sometimes, because AIs aren’t predictable in the way that we think they will be.

From a more practical perspective, imagine if instead of creating something to do image and text generation, Google had built an AI to find the most efficient way to fly an airplane, and the AI figured the best way to do that was to turn off the oxygen in the cabin. That’s what Gemini seems to be doing.

It’s not at all biased in the way you would expect, and instead is biased in a really, really different way that no one predicted. And that’s the part that I think you should be thinking about when we talk about AI, and how you use AI in your business. Think about this for a minute. We know for sure that Google trained its AI to avoid biased and outright racist outputs.

And I’m going to be very transparent: I don’t think that’s stupid, evil, or negative. In fact, it’s something that every big company does every day. Not just with AI, but in everything they do. And most small ones do the same thing. That’s not even cautious; that’s just plain smart. That’s how you run your business.

How many messages are put out by every company in the world every day that manage not to offend people? You know, like most of them. That’s pretty normal behavior, to say we want to make sure that we’re representing the broadest possible community, and we’re doing so in a way that is respectful of our customers and our larger community.

So, yeah, Google didn’t think to test Gemini with “draw me historically accurate pictures of Nazis” or “explain whether Elon Musk and Pol Pot are ethically comparable.” And the reason they didn’t, I think, is because we know for sure they were thinking of things that are so much worse. And I’m going to apologize right up front here to anyone who’s sensitive about what I’m about to talk about.

But I was having a conversation the other day with someone who noted that 4 percent of internet sites contain pornographic material, and that, depending on whose numbers you go by, 20 percent of all searches are for pornographic material. Without guardrails in place, it’s pretty much a given that somewhere between 4 and 20 percent of the images produced by an AI trained on the internet would include naked people.

Some of them doing things that aren’t fit for a mass audience. And that’s just one category of material that no business wants out there with its name attached to it. Now, this is where it gets darker still, and again, I apologize in advance, but I’m going to read you a quote from an internal paper at Google that described how Gemini works.

It was the internal paper where they announced the Gemini project and its status inside the company. They said,

"Our model safety policies reflect our established approach towards product safety and preventing harm in consumer and enterprise contexts. Policy areas include generation of child sexual abuse and exploitation content, hate speech, harassment, dangerous content such as guidance on how to make weapons, and malicious content.
We also aim to reduce bias in our models via guidelines focused on providing content that reflects our global user base."

That’s the end of the quote. Google cares about this so much that they actually have guidelines for how long human reviewers can look at the material being output, and the material being input, to make sure that the people exposed to that stuff don’t get exposed to it too much.

They provide mental health services for the people in these jobs, specifically because they’re exposed to such terrible, terrible things on the internet, and they want to make sure those people are okay. I actually have zero problem with that. There is a lot of awful, awful stuff on the internet. And so I think a system that compares the ethics of mass-murdering dictators and internet billionaires probably wasn’t high on their list of concerns.

And if you say, well, it should have been… I don’t know. I talked about Google’s scale problem a couple of weeks ago, in the episode about Google lacking vision and Big Tech earnings. Google gets eight and a half billion searches a day. Imagine if even 1 percent of those were for objectionable material.

That’s 85 million potentially objectionable requests. As my friend Mike Moran says, when your UI is an open-ended prompt, it’s devilishly difficult to stamp out all problems. An open-ended prompt provides all sorts of possibilities. Anyone who thinks this is easy to test has never done it. Just to give you a point of comparison, I’m going to pull from my own website.

77 percent of the searches on my website were unique over the last year. The most common search term accounted for 11 percent of all searches. The second most common accounted for just under 4 percent. And I’m very confident that Google’s numbers are even more skewed towards the long tail. Literally, large user bases are where the concept of the long tail came from.

11 percent of 85 million searches is 9.35 million. So what’s Google supposed to do about the other 75.65 million requests? Right? That’s a tough problem to solve. Of course they automated this. Of course they built in rules. And, of course, because of the way AIs work, those rules produced a whole bunch of stuff that nobody saw coming.
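If you want to check my math, here’s the same back-of-the-envelope arithmetic as a few lines of Python. To be clear, this is just an illustration: the 8.5 billion figure and the “imagine even 1 percent” assumption are from above, and applying my own site’s 11 percent head-term share to Google’s volume is purely for the sake of the example, not real Google data.

```python
# Back-of-the-envelope scale math from the discussion above.
# Applying my site's 11 percent head-term share to Google's
# volume is purely illustrative, not an actual Google statistic.

daily_searches = 8_500_000_000          # ~8.5 billion Google searches per day
objectionable = daily_searches * 0.01   # "imagine even 1 percent" -> 85,000,000
head_term = objectionable * 0.11        # single most common term -> 9,350,000
long_tail = objectionable - head_term   # everything else -> 75,650,000

print(f"Potentially objectionable requests/day: {objectionable:,.0f}")
print(f"Covered by the most common term:        {head_term:,.0f}")
print(f"Left in the long tail:                  {long_tail:,.0f}")
```

That long-tail number is the whole point: even a perfect filter for your most common problem query leaves tens of millions of unique requests you never saw coming.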

My entire argument here, by the way, also assumes that there were no bad actors on the internet aching to make Google look bad. That’s just what happens if we take basic numbers and add them up; it gets ugly quick. That puts Google in a really tough position. And that position, and this situation overall, underscores why market leaders have such a tough time winning in new markets.

Google has to protect its brand reputation and its advertisers’ brands in a way that’s going to make it exceedingly risk averse. And notice, all of the things I just talked about, all of the things they called out in that paper, are things that every brand would want to avoid being associated with. So, I don’t think Google did something evil here.

They’re just in a tough, tough position when they’re also trying to protect their brand. And let’s face it, lots of startups don’t have that same problem. Even if they made the same kind of mistake that we’ve seen with Gemini here, it probably wouldn’t tarnish their brand irreparably in the short run or the long run.

They could just say, oh my gosh, it was a beta. We fixed it. And now we’re back; take two. Google is going to struggle with that. And that’s where I want you to think about this for your business. This is where I want you to focus on what I think is the most important problem. For starters, you likely don’t have Google-scale problems, so yay!

But you will also have to take it for granted that any artificial intelligence you’re exposing to the world could behave unpredictably. Even if you think you’ve built in the right guardrails, as Google did, the guardrails themselves can produce outputs you couldn’t possibly expect. I don’t think Google would have released this if they thought this was something that they were going to see.

Maybe they did. Maybe they looked at this and went, oh, nobody’s going to care. But I doubt that. I really do. You know, they’re walking it back all over the place. Sundar Pichai has put out a memo basically saying, we screwed up, we’re going to fix this, this is unacceptable. As you would expect he would.

They’ve gotten a lot of egg on their face here. But it demonstrates the thing you need to be thinking about: you’re going to get outputs you don’t expect. And it means you need to put the right governance procedures in place, just as you probably have today, to ensure that any outputs that are generated align with your brand’s values and the values of your customers.

That’s an important point, and it’s the part I think people need to focus on more. AIs are unpredictable. You have to expect that they’re going to be unpredictable, and that they’re going to be unpredictable in unpredictable ways. So your governance procedures, and the way you expose these tools to the internet, have to be really well thought through to ensure you’re not creating other bias problems. You have to monitor the outputs. You have to monitor the customer experience throughout to ensure you’re not creating a problem different from the one you were trying to prevent in the first place.
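To make that a little more concrete, here’s a minimal sketch in Python of what that kind of governance gate might look like. Everything in it is hypothetical: the blocklist, the function names, and the 5 percent human-review sample are stand-ins for whatever moderation model, vendor API, and review queue your business would actually use. It’s a shape, not an implementation, and it certainly isn’t how Google does it.

```python
# A minimal, hypothetical sketch of an output-governance gate.
# The blocklist check is a crude stand-in for a real moderation
# model or vendor API; the review_rate simulates routing a sample
# of outputs to human reviewers for auditing.

import random

BLOCKLIST = {"slur-example", "threat-example"}  # placeholder policy terms

def violates_policy(text: str) -> bool:
    """Crude stand-in for a real moderation check."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def governed_response(generate, prompt: str, review_rate: float = 0.05):
    """Screen a model's output before it ships; sample some for human audit."""
    output = generate(prompt)
    if violates_policy(prompt) or violates_policy(output):
        return None, "blocked"                      # never show it
    if random.random() < review_rate:
        return output, "queued_for_human_review"    # humans spot-check a sample
    return output, "released"

if __name__ == "__main__":
    # Dummy generator standing in for the real model:
    fake_model = lambda p: f"Generated answer to: {p}"
    print(governed_response(fake_model, "Draw the Founding Fathers of America"))
```

The design point is the wrapper itself: the model never talks to your customers directly, every output passes through policy checks you control, and a sampled slice always lands in front of a human being.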

On a completely separate note, I also think there’s a big silver lining for lots of people here that we want to highlight.

We’ve heard lots of people say that AI is going to take their jobs away. Right? And obviously, every time an AI makes a mistake, people point to it as evidence for why that’s not true. But I think this specific type of mistake is very hard to eliminate at all, and is particularly strong evidence for why AI won’t take every job.

Now, let’s be fair. AI is going to make some jobs go away. You’ve heard me tell the story about my grandmother who was a telephone operator a hundred years ago. That’s a fact. She was. And technology made her job go away because technology always makes some jobs go away. Also be aware, there are more women working today than were even alive a hundred years ago.

The reason I bring up women in this case is because telephone operators were largely women and it was one of the largest job categories for women when my grandmother was a telephone operator, right? Today there are more women working, period, than even were alive when my grandmother was a telephone operator.

Clearly, there are plenty more jobs. The reality is, if 100 percent of the value you bring to your organization can be automated, the problem isn’t with the automation. Right? Instead, you need to think about how you can use AI to augment the value you bring. Think about how automation and technology have changed lots of jobs over the years.

For instance, just to pick one, think about how we used to move material and freight from one place to another. A couple of teamsters (the name comes from managing a team of horses, mules, or oxen) loaded freight on a wagon pulled by those horses, mules, or oxen, and took it to the next town or the next county or the next state.

Then one day, the truck came along. Well, what happened? The truck didn’t eliminate the teamsters. It created truck drivers, and the speed with which trucks could deliver things created more demand, which then created more demand for truck drivers. So: more teamsters, more jobs. There are three and a half million truck drivers in the U.S. alone today. That’s more than 15 percent of the entire U.S. population in 1850, right? So, clearly, we’ve seen an increase in demand because of technology, not technology taking the jobs away. I suppose if you were the person shoeing the horses, or the oxen, or the mules… do oxen and mules even wear shoes? You get my point.

That job went away. But the truck created more jobs than it eliminated. There’s a phenomenal piece by a guy named David Autor, and I think I’m pronouncing his name correctly, in an online journal called Noema, about how AI could actually help rebuild the middle class. And he gives the example of nurse practitioners, and he says, to put it more simply, electronic medical records and improved communication tools enabled nurse practitioners to make better decisions.

Moving forward, AI could ultimately supplement the expert judgment of nurse practitioners engaging in a broader scope of medical care tasks. And this point applies much more broadly. From contract law, to calculus instruction, to catheterization, AI could potentially enable a larger set of workers to perform high stakes expert tasks.

It can do this by complementing their skills and supplementing their judgment. Not to be flip, but given what we’ve seen from the Gemini debacle, we’re going to need more skilled, knowledgeable people to understand whether the outputs produced by AI are accurate and safe. It’s not an either/or situation, AI or people; it’s both/and.

Both people and AI working together, just like trucks and their drivers. So when I look at this whole situation for Google and Gemini, I think what we’ve learned is: A, Google had a terrible product launch. B, they’re probably not evil, or at least if they are, this isn’t evidence for it. And C, probably most importantly, AI is unpredictable.

Even the way it’s unpredictable is unpredictable. D, you need to think about how AI can be used against you, and how it can hurt your brand as you deploy these tools. We’re going to see lots of examples like this one in the near term, and probably over the long term, too. We know that we still see people pointing out internet fails all the time.

I don’t think AI is going to be any different in that regard. E, you need to ensure that the AIs you’re exposing to the public are fair, accurate, and safe. Think about your governance procedures. Who’s reviewing the work that your team or your company’s AI produces? You’re going to be using AI more and more.

You want to be sure it’s producing the right material and doing so in a way that represents your brand well and is beneficial to your community, not harmful. So think about your governance procedures long and hard. F on my list here: this also highlights Google’s problem, and the problem that any market leader has trying to adapt to a new technology or a new reality.

I don’t know that Google is doomed, but they’re going to face some problems from this, and we may see that play out as we go forward. Gartner had a piece the other day, which I’ll link to in the show notes, predicting that search will drop by 25 percent in the next two years. I don’t believe those numbers, but in the interest of time, I will address them in a future episode.

And lastly, G, this episode highlights why AI isn’t going to eliminate every job, but rather create whole new categories of jobs. We are definitely living in a different world as we work with AI. But just as we did with social, just as we did with mobile, just as we did with the internet, we will learn to put the right governance procedures in place to ensure that we’re protecting our brands, we’re protecting our businesses, we’re protecting our customers, and we’re protecting our communities.

If you put your focus there, and less on the specifics of what happened, you’re setting yourself up for long-term success in a way that Google itself may struggle to do.

Show Wrap-Up and Credits

Now, looking at the clock on the wall, we are out of time for this week.

And I want to remind you again that you can find the show notes for this episode, as well as an archive of all past episodes, by going to timpeter.com/podcast. Again, that’s timpeter.com/podcast. Just look for episode 415.

Subscribe to Thinks Out Loud

Don’t forget that you can click on the subscribe link in any of the episodes that you find there to have Thinks Out Loud delivered to your favorite podcatcher every single week. You can also find Thinks Out Loud on Apple Podcasts, Spotify, YouTube Music, anywhere fine podcasts are found.

Leave a Rating or Review for Thinks Out Loud

I would also very much appreciate it if you could provide a positive rating or review for the show whenever you use one of those services.

If you like what you hear on Thinks Out Loud, if you enjoy what we talk about, if you like being part of the community that we’re building here, please give us a positive rating or review.

Reviews help other listeners find the podcast. Reviews help other listeners understand what Thinks Out Loud is all about. They help to build our community and they mean the world to me. So thank you so much for doing that. I very, very much appreciate it.

Thinks Out Loud on Social Media

You can also find Thinks Out Loud on LinkedIn by going to linkedin.com/tim-peter-and-associates-llc. You can find me on Twitter or X or whatever you want to call it this week by using the Twitter handle @tcpeter. And of course, you can email me by sending an email to podcast(at)timpeter.com. Again, that’s podcast(at)timpeter.com.

Show Outro

Finally, and I know I say this a lot, I want you to know how thrilled I am that you keep listening to what we do here. It means so much to me. You are the reason we do this show.

You’re the reason that Thinks Out Loud happens every single week. So please, keep your messages coming on LinkedIn. Keep hitting me up on Twitter, sending things via email. I love getting a chance to talk with you, to hear what’s going on in your world, and to learn how we can do a better job building on the types of content and community and information and insights that work for you and work for your business.

So with all that said, I hope you have a fantastic rest of your day, I hope you have a wonderful week ahead, and I will look forward to speaking with you here on Thinks Out Loud next time. Until then, please be well, be safe, and as always, take care, everybody.

Tim Peter is the founder and president of Tim Peter & Associates. You can learn more about our company's strategy and digital marketing consulting services here or about Tim here.

