
Today, Australian businesses use AI for tasks such as drafting, reporting, analysis, and administration. You are probably one of them, but have you considered the risk once your business information leaves the store's control? The problem, I am sure, is not foul play by the AI company but the way the information is stored. This is what many business people miss: where does the information in an AI service actually sit?
Key Takeaways
- Data movement risk: Online AI moves prompts and files into third-party systems, creating privacy, confidentiality, and retention risks.
- Control advantage: Local AI keeps prompts and files inside a business-controlled environment, giving retailers stronger control over access, logging, and deletion.
- Privacy obligations: Personal information in AI prompts can trigger Privacy Act duties about collection, use, disclosure, accuracy, and security.
- Commercial exposure: Confidential business information, such as supplier terms, pricing logic, and dispute strategy, can create serious risk even when privacy law does not apply.
- Shared access risk: Shared AI accounts can expose past prompts, uploaded files, and internal thinking to unauthorised parties.
- Tiered policy: The right answer for most retailers is not all-local or all-online, but a tiered approach that matches each task's sensitivity to the right tool.
What Is Local AI vs Online AI for Business?
Local AI is AI software installed on your own computer and run on it, rather than accessed over the internet. It is attracting a lot of public interest right now. The two biggest advantages people see are cost and privacy. Today, local AI can deliver roughly 80% of what the online AI companies can give you.
Let us look at a real use case.
A staff member asks an AI tool to rewrite their reply to a customer complaint. What is now stored on the AI service is the angry shopper's name and address.
How Does the Privacy Act Apply When Staff Use AI?
The Privacy Act applies to AI use whenever it involves personal information.
The government says you must take a careful approach to AI use involving personal information, conduct due diligence on the product, and build human control, privacy governance, and staff procedures around its use.
What is actually frightening is that AI hallucinates. In one study I saw, the rate today was between 0.6% and 2.6%. What happens if an AI hallucination incorrectly states that a specific staff member was fired for theft, and that information gets out?
The other concern is that an AI can be very good at linking other information to a person. For example, a pharmacy that collects a patient's email address for a receipt cannot legally dump those email addresses into an AI tool to predict future medical purchases. The Privacy Act restricts businesses from using data for secondary purposes without consent or an exception.
What About Confidential Business Information?
Confidential business information may include pricing policies, supplier terms, internal reports, and other information that drives your commercial advantage. This data can create serious risk when it leaves the business. For example, I remember a newsagent who was very upset when he discovered that a report showing his seasonal greeting card markup by supplier had been given to another shop nearby.
What about using AI to prepare an email to a supplier, asking for an extension to pay because the business does not have the money this week, and that email then ends up with another of its suppliers?
Shadow AI
Many businesses have one AI account and share it among all staff. We call this a shadow AI account. In practice, it means everyone with the login can see everyone else's information.
It can get worse. Most staff today carry smartphones and use AI on them, which means your confidential information ends up stored in a staff member's personal AI account. I have no easy answer to that problem.
How Long Do Online AI Providers Keep Your Information?
There are pluses to this information being stored for a while. In the example above, where a merchant writes to a supplier for an extra month's credit, the merchant may need to refer back to that letter in a few weeks, so some retention is useful. Most providers claim they keep it for about 60 days, but I have read that their internal logs retain it much longer. We do know from a court case in the US that even after the user deleted the information, it was still retained for training purposes.
We do know that AI companies analyse your messages, not just for training purposes but also to detect illegal activity such as child exploitation. How deeply they go, I do not know. Still, I remember how, a few years ago, people complained that Google Gemini was becoming unusable because it was so politically correct: the AI refused to refer to a restaurant's hot chilli dish by its name. What the AI companies do with the information they flag, I am not sure. It would be nice to know.
Can Using AI in a Legal Dispute Damage Confidentiality or Privilege?
Short answer: YES.
The police or courts can demand this information from you or from the AI company. If the AI company operates under US law, they will have no problem getting it. If you use, say, a Chinese AI company, it may not be so easy for them.
If you want to know more, check out the Federal Court of Australia, which published its Generative Artificial Intelligence Practice Note, GPN-AI, on 16 April 2026. This document sets clear expectations regarding the responsible use of AI during proceedings.
A judge would have no problem ordering a retailer to disclose exactly which AI software it used to summarise thousands of pages of contested supplier invoices, and to demand a copy of the output.
Why Does Local AI Appeal to Privacy-Conscious Businesspeople?
The first point is that it limits the risk of third-party access to the data. Today, about 80% of all AI requests in large organisations reportedly go through their local AI. It gives the organisation direct ownership of its security and usage, and it often lets you know exactly who asked what, and when.
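One way a small business could get that "who asked and when" visibility is a thin audit-logging wrapper around whatever local model it runs. The sketch below is a minimal illustration, not part of any specific product; the log file name and record fields are my own assumptions.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit log file; in practice this would live on a
# business-controlled server with restricted access.
AUDIT_LOG = "ai_audit_log.jsonl"

def log_prompt(user: str, prompt: str, logfile: str = AUDIT_LOG) -> dict:
    """Append a timestamped record of who asked what, before the prompt
    is handed to the local model. Returns the record for inspection."""
    record = {
        "user": user,
        "when": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because the log stays on your own hardware, you control how long it is retained and who can read it, which is exactly the control an online provider cannot give you.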
AI Policy Statement
Here is one I wrote; feel free to use it or modify it as you require.
Artificial Intelligence (AI) Acceptable Use Policy
- Policy Purpose
This document defines how we may use artificial intelligence tools to improve efficiency while protecting customer privacy and commercial confidentiality. It establishes clear guidelines for tool selection, information handling, accuracy verification, and incident reporting. The policy will be reviewed regularly to keep pace with technical progress and regulatory requirements.
- Scope of Policy
It applies to all full-time employees, casual staff, contractors, and temporary personnel. It covers AI usage on company-owned devices, shop-floor tablets, cloud workstations, and personal devices used for work.
- Approved Tools and Account Access
Staff must use only AI platforms authorised by management. The IT department maintains a register of approved tools with defined security certifications. Single sign-on credentials are required for all licensed accounts. Unapproved public applications need management approval first. Use of tools falling below minimum security thresholds will stop immediately.
- Protecting Point of Sale Data
Extreme caution must be exercised when exporting information from our organisation. Raw transactional records, customer loyalty databases, and end-of-day financial summaries must never be uploaded to unapproved public platforms without prior sanitisation. Personally identifiable information should be masked before export.
- Human Review and Accuracy
Artificial intelligence models frequently generate plausible but incorrect outputs, a behaviour known as hallucination. Each employee remains fully accountable for the accuracy of machine-assisted work before submission. If in doubt, see management before release.
- Incident Reporting and Consequences
Any accidental data exposure involving AI platforms must be reported to a manager on discovery. Rapid reporting enables immediate containment, including session termination, cloud cache deletion, and customer notification if required.
Regards
Manager
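The "mask before export" step in the policy above can be partly automated. Here is a minimal sketch of what that masking might look like; the patterns are illustrative only and far from exhaustive, so treat this as a starting point rather than a complete PII scrubber.

```python
import re

# Illustrative patterns only -- real PII detection needs far more care.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
# Rough match for common Australian phone formats like 0412 345 678.
PHONE = re.compile(r"\b0[2-478](?:[ -]?\d){8}\b")

def mask_pii(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders
    before the text leaves the business."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

Running every outbound prompt through a filter like this does not remove the need for staff judgement, but it catches the obvious leaks before they reach a third-party service.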
Conclusion
An important AI question for a businessperson is who controls the information after it is entered into an AI service. You must ensure your operational data remains secure. For both cost and security, I suggest you consider local AI where possible. I discussed deployment of local AI here.
Written by:

Bernard Zimmermann is the founding director of POS Solutions, a leading point-of-sale system company with 45 years of industry experience, now retired and seeking new opportunities. He consults with various organisations, from small businesses to large retailers and government institutions. Bernard is passionate about helping companies optimise their operations through innovative POS technology and enabling seamless customer experiences through effective software solutions.

