How AI Can Impact You
Writing an article about AI on May 30, 2023, feels a little like being on a plane midday on March 12, 2020. What would have happened by the time I touched down after 2.5 hours in the air? That was the day the dominoes started falling in the United States: Broadway went dark and the NHL suspended its season. The next day, Delta Air Lines canceled all flights to Europe, and both the Masters Tournament and the Boston Marathon were postponed.
By the time this article is published online at greenprofit.com on July 1, 2023, there will be new tools to use, new legislation to consider and other differences that none of us can predict. AI development is moving that fast.
On the March 21 episode of The Ezra Klein Show podcast, guest Kelsey Piper, a senior writer for Future Perfect at Vox, said, “If we have enough time, we can make almost anything that happens to us a good thing. And the problem is if we have enough time.” Generative AI is evolving (generating?) so fast that laws, and understanding, can’t keep up.
How Does It Affect Us?
We’re in an industry where customers still fax us orders. You might wonder, “Do we really need to be concerned about AI?” This is where our collective experience during COVID comes in handy. Remember the breakneck implementation of e-commerce options for customers, also in March 2020? Whether for economic, labor or knowledge reasons, many businesses had not yet implemented any kind of e-commerce, but conditions beyond our control forced our hands. While those hastily enacted e-comm solutions have evolved, for many, they’re here to stay in one form or another, primarily because customers expect them.
The same will become true, sooner rather than later, with AI-related services and tools, because your customers are also customers of large, global companies with billions of dollars to invest in AI, companies that are already rolling out new solutions, especially in the areas of customer service and order fulfillment. Grocery stores taught customers who’d never ordered online how to do it, and we got to reap the benefits of that acceleration in tech adoption. It already happened with SMS (text) communication, too. The same thing is going to happen with products and services underpinned by AI.
Here’s what customers might expect, sooner than you think:
• A humanlike chatbot that helps them figure out what’s wrong with their plant. (This type of research and development is already happening in larger scale agriculture.)
• A chatbot that helps them select plants.
• A service that “designs” a garden for them if they upload a picture of the space they want to plant and enter details like sun and shade. (Heck, I think someday all I’ll have to do is put my GPS coordinates in and ask some sort of AI assistant to design a garden and that’s all it’ll need).
• AI-assisted returns and substitutions: “We’re out of this. Would you like that?” And “that” will be an appropriate substitution, not just something randomly chosen.
• The ability to rattle off a list of plants to an AI assistant, which then goes online, locates the plants and places an order for them. (Auto-GPT and BabyAGI are autonomous AI agents that can already complete more complex tasks.)
Our businesses don’t operate in a vacuum. AI advancements and accessibility will also affect your staff because they will be using AI-related tools while going about their daily lives and they’ll automatically want to use them at work, as well. It’s up to you to create guidelines and boundaries.
If there’s a common thread amongst AI developers, it’s that nobody knows exactly how the rapid rise of generative AI tools (like ChatGPT) will play out, because they’re generative. Many of today’s AI tools take the information they’ve been trained on, coupled with feedback, and make new things, or rather, combine existing words and phrases to assemble new ones, mostly by predicting which words should occur in sequence to answer the question or prompt. String several different AI tools together and they can get all kinds of things done. However, unlike humans, the machines have no conscience and no morals, unless they’ve been trained to behave as if they do. Even then, the outputs can be unpredictable.
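For the technically curious, that next-word-prediction idea can be sketched in a few lines of Python. This is a drastic simplification, not how real LLMs work (they use neural networks trained on billions of examples, not simple word counts), and the training text here is invented for illustration:

```python
from collections import Counter, defaultdict

# Invented sample "training" text, purely for illustration.
training_text = (
    "water the plants daily water the garden weekly "
    "prune the roses prune the tomatoes"
)

# Count how often each word follows each other word (a word-pair table).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("water"))  # prints "the" (it always follows "water" above)
```

Real models do something conceptually similar, over vastly more data and with far more context than a single previous word, which is why their answers can sound fluent while still being wrong.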
Kurt Muehmel of Dataiku, a leading AI company, wrote in a March 2023 blog post, “As it turns out, our understanding of why they work is—spookily—only partial.”
I started writing this article on May 30. On May 31, the Center for AI Safety released this statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” We have to take AI seriously.
Here’s what you can do now to attempt to set your business up for success in a future with AI incorporated into almost every facet of business.
Learn about AI: We’ve all been using AI-powered technology for years, perhaps without realizing it. Google’s predictive text, Siri, Alexa and “you might also like” recommendation engines are all AI. So are those annoying airline chatbots. They’re annoying because they (and the airline industry) currently aren’t evolved enough to, say, chat with you and help you rebook flights when yours has been delayed or canceled. But they’ll get there soon.
Listen to podcasts like “The Marketing AI Show.” Read articles and watch webinars. Read resources on safe.ai to understand how different AI models work.
Examine AI capabilities of your current software: Almost every bit of SaaS (Software as a Service, such as MailChimp, Shopify, Constant Contact) has or will soon have AI capabilities baked in. You could start by just searching the software name and “AI.” When I searched “Epicor and AI,” I got a mid-May article outlining how ChatGPT is now integrated with EVA, the Epicor Virtual Assistant.
There are two main reasons to get a handle on AI and software:
1. So that you can use those capabilities if they’re actually helpful.
2. So that you understand how your company data and your customers’ data are being handled.
Experiment with AI tools: It’s free to sign up for ChatGPT. You can now add Bard, which is Google’s generative AI language model (like ChatGPT), to your personal Google account. Network administrators can add it to Workspace accounts. The city of Boston has just done so and issued clear, concise explanations and guidelines that could serve as models for other businesses and municipalities. Read them HERE. Play around with the tools by asking them gardening questions or by requesting a 31-day social media topic calendar. The only way to see how these tools work is to work with them. Find what you like and what you don’t. One thing I like about the city of Boston’s guidelines is that they request personnel disclose when they use generative AI tools and which model they used. It’s a good way to build trust and transparency.
Caution: Do not upload identifying personal data, company data or confidential data to public AI platforms like ChatGPT or Bard. It’s tempting to dump all of your financial data into the models and ask them to generate reports, but A) the reports might not be accurate and B) you’ve just given all of your data to the Internet. Basically, if you wouldn’t post it on social media, don’t upload it to an AI tool.
Talk with your service providers about how they use AI: Because of the rapid release schedule of AI developments that affect daily business operations, such as Google search and Google advertising updates, it’s important that any marketing service providers such as web developers, social media marketers, pay-per-click advertising companies and marketing firms stay on top of innovations so they can tweak and adjust what they’re doing accordingly. Ask your marketing firm or your copywriter how they use AI. If they say, “I haven’t thought about it,” start thinking about looking for new providers. Anything that impacts the flow of customers to your doors deserves attention. Maybe you don’t need a new email marketing firm, but if your PPC agency isn’t on top of things, they’ll cost you tons of money.
Create AI use guidelines for your business and discuss with staff: Pandora’s box is open and there’s no going back. Rather than ban the use of AI tools, or encourage a free-for-all use of the tools, talk with your staff and create guidelines. Important points to cover:
• AI is not “set it and forget it.” Staff are responsible for fact checking anything generated by AI tools.
• Anything published using AI in its creation needs to be edited to adhere to brand voice and guidelines, including images, text, audio and video.
• Detail what types of data can and cannot be uploaded to and used with public AI systems. Make it completely clear that no confidential information is to be uploaded.
• Discuss the importance of human touch and creativity in marketing and customer service activities. Currently, most text created by AI language models like ChatGPT sounds like it was written by a robot, unless the user is experienced with the tools. And, at least right now, most of us would rather hear from a human than a bot.
Incorporate time-saving AI solutions into workflows: Just because AI tools are built into a piece of software doesn’t mean they’re helpful. Spend time experimenting to find what actually saves time or generates useful ideas and leave the rest alone. There will always be newer and better tools coming. No need to drink from the fire hose. A Stanley Tumbler is just fine. GP
Katie Elzer-Peters is the owner of The Garden of Words, LLC, a green-industry digital marketing agency. Contact her at Katie@thegardenofwords.com.
AI Lingo to Know
Agent: A program that receives inputs and then takes action based on the way it’s been programmed and the training (data, feedback) it’s been given. Chatbots are agents. Technically, thermostats are agents because they receive inputs (temperatures) and act accordingly (turn heat or air conditioning on or off).
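The thermostat-as-agent idea above can be sketched in a few lines of Python. This is a toy illustration with made-up set points, not a real control system, but it shows the pattern: input comes in, a rule decides, an action comes out:

```python
# A thermostat as a simple agent: it receives an input (the temperature)
# and acts according to the rules it was given.
def thermostat_agent(temperature_f, set_point_f=68, tolerance_f=2):
    """Decide an action based on the temperature input."""
    if temperature_f < set_point_f - tolerance_f:
        return "turn heat on"
    if temperature_f > set_point_f + tolerance_f:
        return "turn air conditioning on"
    return "do nothing"

print(thermostat_agent(60))  # prints "turn heat on"
print(thermostat_agent(75))  # prints "turn air conditioning on"
print(thermostat_agent(68))  # prints "do nothing"
```

A chatbot is the same shape with far richer inputs (your words) and actions (its replies).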
Hallucinations: In AI, the term describes false assertions or answers generated by large language models. At this stage of AI development, hallucinations are frequent and poorly understood, making human fact-checking of AI-generated outputs imperative.
Large Language Model (LLM): The most talked-about type of AI program in spring 2023, large language models are trained on huge amounts of data (think everything available on the Internet) and programmed to generate answers in a conversational way using predictive text. Widely used LLMs include ChatGPT (created by OpenAI), Bard (created by Google) and Bing Chat (created by Microsoft).
Prompt: Instructions given, or questions asked, to an AI program to request a response or output, such as “Write me a 400-word blog post about how to grow tomatoes in Minnesota. Include subheadings and use information from University of Minnesota Cooperative Extension.”
SaaS (Software as a Service): Subscription, cloud-based software (not software you install on your computer) such as MailChimp, Shopify or Canva. GP