
AI is like that friend who always sounds confident, even when they have no clue. “Sure, I can help you with your tax return, your health, your love life, and a quick will!” And sure, that sounds impressive – until you’re panicking over whether you might have cancer, the tax authorities are knocking on your door, and your date says “you seem a bit… robotic?” Let’s be honest: AI isn’t magic, it’s a machine. And sometimes a machine with delusions of grandeur. Here are 15 things you absolutely should not use ChatGPT for – unless you want to be the one AI “helped” straight into trouble.
It is easy to believe that you can replace a doctor with a few quick AI questions. Stomach pain? Enter the symptoms – and suddenly you are dying of something that sounds like a tropical disease no one has heard of since the 1800s. AI is trained on vast amounts of data, but it lacks a few small but important things: clinical experience, physical examinations, and medical ethics. It has no stethoscope, cannot feel your pulse, and above all – it has no liability insurance. Go to the health center, not the prompt window.
ChatGPT can simulate empathy, but it is like recording a pep talk to yourself – sometimes it works, but it is empty. And when AI starts interpreting facial expressions or tone of voice with so-called “multimodal analysis,” it feels almost creepy. It can see that you are sad, maybe, but it has no idea why – and even less idea what to do with that information. A bad interpretation can be dangerous. When it comes to your well-being – choose someone who actually cares.
You have three seconds to act. There is a fire, gas is leaking, or someone is about to faint. Typing “What do I do if I smell gas in the house?” in ChatGPT is the opposite of smart. AI cannot hear the alarm, cannot smell, and above all – it cannot help you. 112 can do that. Use AI afterwards if you want to write a dramatic retelling of the incident. Not in the middle of it.
“What is the ROT deduction?” is an okay question. “Can you help me file my taxes?” is a deadly one. ChatGPT doesn’t know what has happened with the tax rules this year, nor what your financial situation looks like. It has no idea about your municipal tax, your deductions, or if you forgot to record 6,000 kronor from your Etsy shop. It can give you examples – but they are often general, incorrect, or even illegal. And if you enter personal ID numbers or bank details into an AI window? Well, then you might just have given Silicon Valley access to your wallet.
You are working on a secret project. You want a summary. You paste the entire document into ChatGPT. Boom! Your data has now traveled to a data center in the USA, where it may be stored, indexed, and even used to train future AI models. Think about that the next time you tell yourself “ah, it’s probably fine.” With sensitive data and AI, it is never fine.
ChatGPT often refuses to answer questions related to illegal activities – but it is not foolproof. If you try to trick it with a clever prompt, it might sometimes cross the line. That does not mean you are smart. It means you just created digital evidence. AI is not your partner in crime – it is a moving target in a system that logs everything you write.
If you use ChatGPT to write your essay, you don’t have to think. You also don’t have to learn. You might even get a grade – but are you ready to explain your work if the teacher asks? AI text is often strangely formal, overly structured, and full of words you’ve never used yourself. It screams “cheating” – and Turnitin loves to expose you. You’re not just cheating the teacher – you’re cheating yourself.
AI can summarize what happened during the week – but it isn’t first on the scene. It doesn’t know about the earthquake that just happened or what the Riksbank said three minutes ago. Want news? Go to news channels. Want an overambitious summary with sources from 2023? Then AI is perfect.
Want to bet money on horses, soccer, or poker? Good luck. ChatGPT will gladly tell you “Team X has won 7 of the last 10 matches,” but that claim is often wrong or based on old data. It can’t sense form, injuries, team morale, or whether Messi has a stomach bug. And no – AI is not Nostradamus.
Writing legal texts with AI is like building a house with a spoon. You might get something that looks okay – but it lacks foundation. One wrong paragraph, a missing witness or the wrong jurisdiction – and your entire will is invalid. AI is not a lawyer. It is a template machine with style.
AI art is beautiful. Often impressive. But it is not your art. It is a remix of thousands of others’ works – without their permission. When you use AI to create an image and call it “my masterpiece” you have basically stolen a little from everyone. Use it as inspiration, not as your next Mona Lisa.
The truly unsettling thing is when ChatGPT is completely wrong – and still says it with a conviction that would make a professor nod. AI is not built to say “I don’t know.” Instead, it makes things up, confuses facts, or responds with a tone that makes you believe you have found the answer to all the universe’s questions. Dangerous? Yes.
AI can certainly help you write a Tinder message. But what do you do when you meet the person IRL? Bring your laptop? Letting AI write for you means building relationships on pretense. Authenticity always beats algorithms.
Are you going to take a new job, leave a relationship or move to a new country? ChatGPT might say “Advantages of moving: New environment, new opportunities.” Okay. Thanks for exactly zero help. It doesn’t know you. It doesn’t know what you dream of. Life decisions require reflection, conversation and human understanding – not AI answers in bullet points.
Multimodal AI is said to be able to read facial expressions and tone of voice. The problem is that human signals are incredibly nuanced. A smile can mean joy, irony, or nervousness. An AI will guess – and often guess wrong. This can be disastrous in interviews, conflict resolution, or therapy.
So, what have we learned today? Well – that AI is like an overambitious intern: fast, confident, and totally unable to grasp consequences. It can be helpful, absolutely. But never let it take the helm. When it comes to your health, finances, relationships, and the law, you need something AI can never provide – namely common sense, experience, and a human brain that does not depend on prompts.
So next time you face an important decision – use ChatGPT as a tool, not as an oracle. Sharpen your own knowledge, hone your critical thinking and remember: it is always better to think for yourself than to let a robot think wrongly for you.