Below is a short guide to help you understand how to use AI responsibly and what to be careful about.
Written and checked by Mr Mueller.
If you ask AI a general question, it will give you a general answer, often mixed with guesses. To get useful results, upload the relevant documents or provide complete information. You can also ask the AI to assess the quality of the documents you share. Without facts, it will simply fill the gaps and invent stories that sound right but are wrong. And yes: if you state an opinion, the AI will agree with you.
The customer is always right. Right? → Except when you are not.
AI can write, translate, and check grammar, but it cannot reason like a human. It doesn’t truly understand what it says. That means it can produce confident-sounding answers that are completely false. In short:
→ AI can and will lie without realising it.
Each AI company builds its AI to answer differently. Some systems are “trained”, meaning told or programmed to avoid certain topics or to favour certain opinions, based on company policies.
These rules can change at any time, which means AI responses may differ depending on when you ask and which platform you use.
Free AI tools often guess more, share user data, and give less reliable answers. Paid models usually analyse more information before responding, provide better-sourced facts, and protect privacy better, though NO AI system is 100% private.
SENBOX uses AI for simple, supportive tasks such as spell-checking, translation, and creating illustrations. We regularly test AI reasoning to understand its limits. We NEVER use AI to write reports, create lesson plans, or conduct assessments. All information is verified by our team using real classroom evidence.
SENBOX uses Google for Education tools, which protect children’s data and comply with privacy standards. SENBOX also builds its own software to help teachers write better reports, keeping control within our organisation rather than with external AI companies.
AI can support learning if used thoughtfully. Here are a few simple guidelines:
✅ Use AI as a helper, not a leader. Ask it to explain ideas or check facts.
✅ Keep learning personal. Consider uploading your child’s real school information (anonymised) so the AI can give accurate support.
✅ Ask AI for sources. Reliable AI should tell you where information comes from.
❌ NEVER rely on AI for educational or behavioural advice. Every child is different — only teachers and professionals can make accurate recommendations.
✅ Discuss results together. Use AI responses as conversation starters to help you think critically, not as final answers.
Bonus tip: Take a picture of these two pages and ask your AI:
“Is this text written by AI?”
Compare services & approaches
“List the differences between Centre A and Centre B for ASD support (staff ratio, evidence-based methods, data tracking, parent training). Make a comparison table.”
“What should I look for during a school visit for a child with language delays?”
Draft questions for professionals
“Generate 10 precise questions to ask a developmental paediatrician about attention and self-regulation.”
“Help me prepare IEP meeting questions about goals, supports, data collection, and review dates.”
Parents’ rights & child protection (general guidance)
“Summarise typical parent rights in school meetings and what ‘informed consent’ means. Answer for [country].”
“Explain what a child-protection policy usually includes and red flags to watch for.”
Explain school policies in plain language
“Translate this behaviour policy into plain English/Vietnamese and list daily-routine implications. Quote exact sections and numbers for any rules.”
Check whether a recommended method is evidence-supported
“Summarise the current evidence for [method]. Cite systematic reviews or national guidelines, and note limitations.”
Upload reports for clarity checks (only if authorised; anonymise and remove metadata/images)
“Review this anonymised report and highlight unclear terms, missing data, or inconsistencies.”
“Extract goals, baselines, and progress measures from this IEP into a checklist (do not rewrite goals).”
Explain conditions/terminology
“Define ‘sensory modulation’ and list classroom supports with citations.”
Limit: AI cannot judge your child’s needs without real data.
Diagnosis or treatment plans
Don’t ask: “Does my child have ADHD?” / “Tell me how to treat my child’s behaviour.”
Safer alternative: “Draft neutral questions for my child’s clinician/teacher about attention, sleep, and classroom impact.”
Evidence check: “Summarise evidence for [method] with citations; list uncertainties to discuss with professionals.”
Risk or safeguarding decisions
Don’t ask AI to decide if something is abuse or a crisis. Contact professionals/emergency services.
Overriding school/clinician advice
Do not use AI to argue medical or legal points without professional review.
Anonymise documents (names, contacts, IDs, faces) and remove metadata; share only if you have the right to do so.
Demand transparency: “Cite sources and list uncertainties. If information is insufficient, say so—do not guess.”
Be specific (age, grade, goal area) but avoid identifiers.
Use AI as a prep tool, not a decider; bring AI-drafted questions to your clinician/teacher.
Localise: “Answer for Vietnam and note whether any legal details may be outdated.”
Important note on AI reliability
AI systems can refuse or say ‘I don’t know’, but in practice they often produce a confident answer even when they are uncertain.
Reduce this risk by instructing: “If uncertain, say ‘insufficient information’ and stop. Do not guess. Provide sources.”
SENBOX follows strict data-protection procedures, uses Google for Education accounts, and never uploads private student data into public AI systems. Transparency about data privacy builds trust and reflects professional standards.
Parents should always contact teachers when they need guidance about:
• Learning difficulties
• Behavioural or emotional concerns
• Therapy or progress reports
• Placement or school readiness decisions.
This is where AI stops and professional judgment begins: teachers remain central.
In closing:
“SENBOX believes that AI, when guided by professional ethics, can help us reduce teacher workload, increase creativity, and make learning more inclusive. But it must always serve humans — never replace them.”
🔹 In this example, the AI said “teacher” because it was unaware that SENBOX has an Education Consultation Team, a very exclusive service that most centres and schools do not have access to. If you tell your AI about all the services SENBOX provides, it will give more valuable recommendations. Otherwise, it assumes you are teaching your child at home or in a mainstream classroom.
✅ Use AI as a teaching assistant, not a teacher. Ask AI to suggest ideas, create visual supports, or check grammar — but always review and adjust before using with students.
✅ Stay within SENBOX accounts. Only use AI tools when logged in with your SENBOX teacher email to ensure student information is protected.
✅ Never input personal or student data. Do not share names, photos, or reports with AI tools. Keep all identifiable information private.
✅ Ask AI for verified sources. Reliable AI tools should provide references for information used in teaching or training.
✅ Use AI results as collaboration tools. Treat AI outputs as starting points for discussion and creative planning, not as final answers.
❌ Do not use AI for student assessment or behavioural advice. Each child’s needs are unique — rely on SENBOX frameworks, IEPs, and professional team input for accurate decisions.