Key Takeaways

  • Set Guardrails Early: Establish an AI policy, form a diverse AI committee, and choose secure, paid tools to ensure responsible use and data protection.
  • Boost Efficiency with AI: AI enhances knowledge search, analyzes feedback quickly, and automates tasks—freeing up time and improving productivity.
  • AI Needs Oversight: AI tools can be helpful but aren’t foolproof. Human review is still essential, especially for complex tasks like accounting.

The world of artificial intelligence (AI) is vast, with seemingly endless possibilities. Its potential is exciting, offering practical applications across various domains, from answering questions to generating financial statements to improving business operations. However, it is essential to approach any new tool with a healthy dose of professional skepticism. With proper planning, generative AI can help us achieve our business objectives in new and exciting ways.

As AI becomes more integrated into our daily lives, employees will naturally turn to it, whether organizations encourage them to do so or not. Because of this, it is important to find safe, well-governed ways to bring AI into the workplace. To that end, Jon Hilton, LBMC’s Shareholder-in-Charge of AI and all-around AI guru, answered three basic questions about how a business should best approach artificial intelligence tools.

What essential steps should organizations take when considering using artificial intelligence tools in their business?

Jon recommends three essential steps that every organization should take over the course of 90 days:

  1. Establish an AI policy; any policy is better than none.
  2. Create an AI committee of diverse users to identify needs.
  3. Select the AI model(s) best suited for the organization’s needs.

While these steps may seem straightforward, each has important considerations to keep in mind.

When establishing an AI policy, an organization should explicitly state that the use of free AI tools is prohibited. Free AI models often use the data provided to improve their functionality, which can expose the organization’s data to other users. Additionally, when using any AI tool, avoid sharing protected health information (PHI), personally identifiable information (PII), or proprietary information. Samsung learned this lesson the hard way in 2023, when employees shared confidential material, including internal source code, with ChatGPT in three separate incidents, prompting the company to restrict employee use of generative AI tools.

When creating an AI committee to evaluate AI options, it’s crucial to include IT representatives as well as individuals from various levels of the organization outside of IT. Members should possess an innovative mindset while being mindful of the company’s realistic needs. Diverse perspectives will help identify comprehensive needs and opportunities that AI can address. Moreover, since many AI programs charge per user, this committee can also determine who should have access to the organization’s various AI tools.

Selecting the AI models best suited to an organization’s needs can be challenging. Many people have encountered AI in the media, but the fictionalized, often vilified versions of AI do not accurately reflect reality. AI theory encompasses four main stages: reactive, limited memory, theory of mind, and self-awareness. Only the first two, reactive and limited memory, exist today. Reactive AI is what you encounter in search engines or as a digital opponent in games. Limited memory AI, like ChatGPT, retrieves and synthesizes information, but it does not form opinions or guarantee accuracy. These two types are the only ones available for practical use, and both have limitations. For limited memory AI, Jon Hilton recommends our clients use one of three paid options: Microsoft Copilot, Google Gemini, or ChatGPT Enterprise. These three programs are consistently the best performers, and each costs roughly $30 per user, per month.

How can AI actually help enhance business organizations?

Jon believes the benefits of AI tools can be categorized into three main areas: knowledge sourcing, feedback analysis, and automation solutions.

Knowledge sourcing is a common application of generative AI, often replacing traditional Google searches or hours spent sifting through company documents. The challenge, however, is verifying the accuracy of the information retrieved. At LBMC, we emphasize the principle of “trust but verify.” One way LBMC assists clients with this is by building databases for AI to reference, ensuring that searches pull from trusted sources, ultimately saving employees hours of work.
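
As a rough illustration of that pattern, the hypothetical Python sketch below retrieves passages only from an approved internal document set and hands them to an AI model along with the user’s question. The sample documents and the ask_model placeholder are invented for this example and are not LBMC’s actual implementation.

```python
# Minimal sketch of "trust but verify" knowledge sourcing: the model only sees
# passages pulled from an approved internal document set. ask_model is a
# hypothetical placeholder for whatever paid, secured AI service a firm licenses.

TRUSTED_DOCS = {
    "expense-policy": "Employees must submit receipts within 30 days of purchase.",
    "pto-policy": "Full-time staff accrue 1.5 days of paid time off per month.",
}

def retrieve(question: str) -> list[str]:
    """Return trusted passages sharing at least two keywords with the question."""
    keywords = {w.strip("?.,").lower() for w in question.split() if len(w) > 3}
    return [
        text for text in TRUSTED_DOCS.values()
        if len(keywords & {w.strip(".,").lower() for w in text.split()}) >= 2
    ]

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a licensed enterprise AI model."""
    return f"[model answer grounded in: {prompt}]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question)) or "No trusted source found."
    # Instructing the model to answer only from the supplied context keeps
    # responses tied to verified company documents instead of the open web.
    return ask_model(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

print(answer("How many days do employees have to submit receipts?"))
```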

AI can also play a significant role in feedback analysis. It can quickly identify common themes in survey data, revealing insights related to satisfaction, dissatisfaction, areas for improvement, and data trends. Data sorting that might previously have taken countless hours can be accomplished in minutes with AI, allowing staff to focus on solutions rather than problems. For teams that export their data to Excel, AI-powered tools can now also generate charts and other visual displays of that data.
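
To make the kind of sorting involved more concrete, here is a small, hypothetical Python sketch that tallies themes across free-text survey comments. The comments and theme keywords are invented for illustration; a generative AI tool handles far messier language, but the underlying task of tagging and counting feedback is the same.

```python
from collections import Counter

# Hypothetical survey comments and theme keywords, invented for illustration.
COMMENTS = [
    "Scheduling was easy, but the wait time was too long.",
    "Staff were friendly; billing was confusing.",
    "Long wait time, though the follow-up call was helpful.",
]
THEMES = {
    "wait time": "Wait times",
    "billing": "Billing",
    "friendly": "Staff friendliness",
    "scheduling": "Scheduling",
}

# Tally how often each theme is mentioned across all comments.
theme_counts = Counter(
    label
    for comment in COMMENTS
    for keyword, label in THEMES.items()
    if keyword in comment.lower()
)

for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} comment(s)")
```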

AI’s role in automating solutions can cause employees to worry about losing their jobs. The reality, however, is that these tools simply enable employees to be more productive. At LBMC, we have helped clients use AI to record and summarize medical providers’ follow-up calls with their patients, making it easier to see which calls have been made and what critical details emerged across them. This efficiency allows clinical staff to provide better solutions for patients in less time. Additionally, AI scheduling tools now allow patients to book appointments 24/7, leading to improved satisfaction and faster treatment, since patients are no longer limited to calling during office hours.

What limitations should we consider when using generative AI?

Having realistic expectations about AI capabilities is crucial. Jon compares AI to a toolbox, explaining that no single tool can address every task. The AI tool used for scheduling patients will differ from those used for medication protocols or patient satisfaction surveys. AI isn’t science fiction, but it is extremely useful when applied correctly.

One area where AI continues to face challenges is accounting services. Shortly after its release in late 2022, ChatGPT 3.5 was found to be unable to pass the CPA exam. This fall, I decided to test ChatGPT Enterprise using fictitious data to explore its strengths and weaknesses in basic financial accounting tasks.

In one experiment, I asked ChatGPT to generate a journal entry for a hypothetical gain/loss transaction. The initial entry it produced was incorrect and unbalanced.

However, after prompting it for a correction, it was able to provide the accurate journal entry.
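
For context, a balanced journal entry always has total debits equal to total credits. As a purely hypothetical illustration (not the scenario used in the test), suppose equipment that cost $18,000 and carries $8,000 of accumulated depreciation is sold for $12,000 in cash, producing a $2,000 gain. The correct entry would be:

  • Debit Cash for $12,000
  • Debit Accumulated Depreciation for $8,000
  • Credit Equipment for $18,000
  • Credit Gain on Sale of Equipment for $2,000

Total debits of $20,000 equal total credits of $20,000, which is exactly the check the initial AI-generated entry failed.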

In another test, I used a dataset of over 10,000 journal entries and asked ChatGPT to identify any entries that were unrelated to the business. Initially, it failed to recognize all the unrelated entries and required a second prompt to identify them properly. Although this process was significantly quicker than manually reviewing the data, it still wasn’t immediately accurate and required my review. These examples illustrate that while AI is still in a formative training stage, it holds considerable potential for applications such as data retrieval, data analysis, and process automation.

If you have any further questions about how AI could benefit your business, please reach out to LBMC’s Jon Hilton, jon.hilton@lbmc.com, for a meeting to see how we can further assist you.

Content provided by Victoria Gentry, Senior Tax Accountant at LBMC. She can be reached at victoria.gentry@lbmc.com.