Tired of people throwing around terms like LLM, prompt injection, and AGI as if they were everyday talk? Relax – here is a guide that explains the most important AI terms with clear explanations, all organized by category. A perfect mix of geekiness and clarity.
🧠 General about AI
- AGI (Artificial General Intelligence) – An AI that can perform any intellectual task, like a human.
- AI (Artificial Intelligence) – Technology where machines simulate human intelligence.
- Artificial life – AI models that simulate biological life and evolution.
- Autonomy – The ability of AI to act without human intervention.
- Bias – Built-in prejudices in AI systems caused by skewed data.
- Black box – When it is impossible to understand how AI arrived at a decision.
- Closed-source – Software or AI model whose code is not available to the public.
- Cognitive computing – Systems that attempt to mimic human thinking.
- Ethics – Moral considerations regarding the use of AI.
- Explainability – How easy it is to understand how AI reasons.
- Hallucination – When AI fabricates false information.
- Intelligence explosion – A hypothesis that AI can improve itself exponentially.
- Knowledge representation – How AI organizes and stores information.
- Model interpretability – How easy a model is to understand and explain.
- Narrow AI – An AI with expertise within a limited domain.
- Open-source – Software that is freely available and modifiable.
- Singularity – A hypothetical point where AI develops faster than we can control.
📚 Training and learning
- Backpropagation – AI’s way to correct itself during training.
- Batch size – How much data is used per training step.
- Bias-variance tradeoff – The balance between a model that is too simple (underfits) and one that is too flexible (overfits).
- Cross-validation – Technique for testing AI models on multiple data splits.
- Data augmentation – Artificially increasing the amount of training data, for example by rotating images.
- Dataset – An organized set of data used for training.
- Epoch – One pass through the entire dataset during training.
- Few-shot learning – AI that learns from only a few examples.
- Fine-tuning – Training an existing model with new data.
- Gradient descent – A method to minimize errors during training.
- Hyperparameter – Predefined values that control the AI’s learning.
- Loss function – A measure of how wrong the AI model is during training.
- Mini-batch – A small subset of data used in training.
- Overfitting – When the AI memorizes the training but cannot generalize.
- Pretraining – Initial training before the model is used practically.
- Reinforcement learning – AI that learns through rewards and punishments.
- Supervised learning – Training on labeled data, where the AI is shown the correct answers.
- Training data – Data used to teach AI to perform correctly.
- Underfitting – When the AI has not learned enough to perform well.
- Zero-shot learning – AI handles tasks it has never seen before.
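Several of the terms above – dataset, epoch, mini-batch, loss function, gradient descent, hyperparameter – come together in an ordinary training loop. Here is a minimal sketch fitting a toy model y = 2x; the data and all parameter values are made up for illustration:

```python
import numpy as np

# Toy dataset: inputs and correct answers (supervised learning).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)   # training data
y = 2.0 * X                        # labels: the "right answers"

w = 0.0            # the single model parameter we want to learn
lr = 0.1           # hyperparameter: learning rate
batch_size = 20    # hyperparameter: mini-batch size

for epoch in range(50):                      # one epoch = one pass over the dataset
    for i in range(0, len(X), batch_size):   # iterate over mini-batches
        xb, yb = X[i:i + batch_size], y[i:i + batch_size]
        pred = w * xb
        loss = np.mean((pred - yb) ** 2)        # loss function: mean squared error
        grad = np.mean(2 * (pred - yb) * xb)    # gradient of the loss w.r.t. w
        w -= lr * grad                          # gradient descent step

print(round(w, 2))  # → 2.0
```

After 50 epochs, w has converged to the true value 2.0. In a real neural network the gradient for every weight is computed by backpropagation rather than by hand.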
💬 Language and comprehension (NLP)
- Attention mechanism – A technique where AI focuses on relevant parts of the text.
- Autoencoder – A model that learns to represent data efficiently.
- BERT – AI model that reads text in both directions for better understanding.
- BLEU score – A measure to assess the quality of translated text.
- Context window – The amount of text AI can hold in memory.
- Coreference resolution – When AI understands which words refer to the same thing.
- Inference – Running a trained AI model on new input to produce an answer.
- LLM (Large Language Model) – A very large language model trained on massive amounts of text.
- Named Entity Recognition (NER) – When AI identifies names, places, dates etc. in text.
- Natural language processing (NLP) – Technology that enables AI to understand and generate language.
- POS tagging – Labeling parts of speech (noun, verb etc.).
- Prompt engineering – The art of formulating good AI questions.
- Prompt injection – Tricking AI by hiding instructions in text.
- Semantic analysis – Understanding the meaning of text, not just the words.
- Semantic search – Searching for meaning rather than exact words.
- Sentiment analysis – Analyzing whether the text is positive, negative, or neutral.
- Token – A small unit of text that AI processes.
- Transformer – AI architecture that makes language models efficient and context-aware.
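The attention mechanism at the heart of the transformer can be sketched in a few lines. This is a simplified illustration with made-up two-dimensional token vectors, not the real multi-head version used in language models:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical embeddings for three tokens; in a real model
# queries (Q), keys (K), and values (V) are learned projections.
Q = K = V = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])

# Each token scores every other token by similarity (dot product),
# scaled by the embedding dimension for numerical stability.
scores = Q @ K.T / np.sqrt(Q.shape[1])
weights = np.apply_along_axis(softmax, 1, scores)  # each row sums to 1
output = weights @ V   # context-aware mix of the value vectors

print(weights.round(2))
```

Each row of `weights` shows how much one token "attends" to the others – this is what lets the model focus on the relevant parts of the text.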
🎨 AI-generated content
- 3D generation – AI that creates three-dimensional objects or environments.
- Audio synthesis – AI-generated music, voice, or sound.
- Diffusion model – Model that creates images by gradually removing noise.
- GAN (Generative Adversarial Network) – Two AI models, a generator and a discriminator, competing to create realistic content.
- Generative AI – AI that creates new content (text, image, music).
- Image generation – AI that creates images from, for example, text commands.
- Multimodal AI – AI that handles multiple data types simultaneously.
- Neural style transfer – AI transfers the style of one image to another.
- StyleGAN – A specific GAN model used to create faces.
- Text-to-image – Technology where text descriptions are converted into images.
🤖 Interaction with users
- Chatbot – An AI system you can chat with.
- Conversational AI – AI that maintains coherent conversations.
- Dialogue management – Technology that controls how AI handles a conversation.
- Embodied AI – AI integrated into robots or physical devices.
- Multiturn conversation – AI that remembers and responds over multiple messages.
- Speech recognition – AI that converts speech to text.
- Turing test – A test of whether an AI can convince a human judge, through conversation, that it is human.
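A multiturn conversation works by sending the model the whole conversation history on every turn – that is what lets it "remember" earlier messages. A minimal sketch, where `echo_bot` is a hypothetical stand-in for a real language model:

```python
# Each message records who said it and what was said.
def echo_bot(history):
    # Stand-in for a real model: just reflects the latest user message.
    last = history[-1]["content"]
    return f"You said: {last}"

history = []
for user_msg in ["Hello", "What did I just say?"]:
    history.append({"role": "user", "content": user_msg})
    reply = echo_bot(history)   # the full history is passed in each turn
    history.append({"role": "assistant", "content": reply})

print(len(history))  # → 4 (two turns of user + assistant)
```

A real conversational AI answers the second question by reading the earlier messages in `history`; the context window limits how much of that history fits.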
🚨 Safety and risks
- Adversarial attack – Tricking AI with manipulated input.
- Alignment – Ensuring AI’s goals and behavior align with human values.
- Anomaly detection – AI detects unusual or dangerous patterns.
- Deepfake – AI-generated content that makes it look like someone did or said something they did not.
- Explainable AI (XAI) – AI designed to be understandable.
- Kill switch – The ability to shut down AI in dangerous situations.
- Model collapse – When a model's capability degrades, for example after being trained on AI-generated output.
- Red teaming – Security testing where AI is exposed to attacks.
- Safety constraints – Restrictions that prevent harmful AI behavior.
- Synthetic data – Fake but realistic data, used for training without violating privacy.
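Anomaly detection can be as simple as flagging values that deviate strongly from the average. A minimal z-score sketch – the sensor readings and the threshold of 2 standard deviations are illustrative only; production systems use far more robust methods:

```python
import statistics

# Hypothetical sensor readings with one obvious outlier.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 9.7, 10.3, 42.0]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag anything more than 2 standard deviations from the mean.
anomalies = [x for x in readings if abs(x - mean) / stdev > 2]
print(anomalies)  # → [42.0]
```

The same idea – learn what "normal" looks like and flag deviations – underlies more sophisticated AI-based detectors.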