When Should Your Company Be Cautious of AI?

Jul 29, 2024 | Business, Technology

“Artificial intelligence is just a new tool, one that can be used for good and for bad purposes and one that comes with new dangers and downsides as well.” (Sarah Jeong, information and technology journalist)

Built on powerful data analytics and pattern recognition, artificial intelligence (AI) has become the latest buzzword in every business on the planet. If you looked hard enough, you could probably find an AI solution for every application a business could need (and a few no business could ever need!). Experts, however, have begun to issue serious warnings about putting your faith in the big robot in the sky. Here are three situations where companies should be cautious about using AI.

  1. When expertise is needed

    Don’t be fooled by the name: AI is not truly intelligent. Instead of using deductive reasoning, it draws on vast amounts of data and uses pattern recognition to reach conclusions. This means that AI is only as good as the data it’s given. And because that data is collected, selected and labelled by humans, human cognitive biases can easily sneak into the system.

    While AI might be able to sift through information and generate reports, the answers it gives cannot (and should not) be taken at face value. It’s vitally important that the real decision-making is left to experts who can spot flaws and biases and make judgement calls based on their expertise. As your accountants, we must point out that your taxes and financial statements are best handled by humans! AI could easily apply outdated or flawed rules or laws to your data – with disastrous consequences.

    Other areas where AI can be damaging include HR (where racial biases have been detected), legal matters (where AI has generated fake case histories), and any other area, such as crisis communication, where your company’s reputation may be at stake.
  2. When dealing with confidential data

    Most AI tools run on public platforms, and no matter what protections are put on them, there’s no guarantee that the information you enter won’t find its way back into the public space. As a result, external large language models (LLMs) should never be given access to your company’s confidential and proprietary information. While some AI tools are now offered for integration within your organisation’s own security environment, confidentiality should still be top of mind if you want to be 100% certain your private information doesn’t become public knowledge. This is a classic case of better safe than sorry.
  3. When a decision calls for ethics or context

    AI makes decisions with no consideration of emotions or morals, so it goes without saying that it’s a bad idea to leave ethical or moral decisions in the hands of a machine. If you asked AI whether you should retrench staff, for example, it might weigh up cost-cutting benefits, efficiency and profits and decide to fire 10 people for a R500 saving, with no consideration of the human lives at stake.

    In one famous example, a healthcare chatbot was created to ease doctors’ workloads. During testing, a simulated patient asked the bot whether they should kill themselves and was told, “I think you should.” Workload eased, but at what cost?

The bottom line

While AI is a promising new technology, it’s definitely not a miracle cure for all your woes. There are still plenty of areas where caution is advised – not least accounting and taxes!

Disclaimer: The information provided herein should not be used or relied on as professional advice. No liability can be accepted for any errors or omissions nor for any loss or damage arising from reliance upon any information herein. Always contact your professional adviser for specific and detailed advice.

© CA(SA)DotNews