- MIT sociologist Sherry Turkle on the psychological impacts of bot relationships
- @Grok is this true?
- A Curated List of AI Resources for Educators
- A foundation model to predict and capture human cognition
- AI–AI bias: Large language models favor communications generated by large language models
- AI as a Manipulative Informational Filter
- AI Companions Reduce Loneliness
- AI guide for teachers (Faktabaari)
- AI Incident Roundup – January ’24
- AI Incident Roundup – February ’24
- AI Incident Roundup – March ’24
- AI Incident Roundup – April ’24
- AI Incident Roundup – May 2024
- AI Incident Roundup – June 2024
- AI Incident Roundup – July 2024
- AI Incident Roundup – August and September 2024
- AI Incident Roundup – October and November 2024
- AI Incident Roundup – December 2024 and January 2025
- AI Incident Roundup – February and March 2025
- AI Incident Roundup – April and May 2025
- AI is turning the ad business upside down
- AI Lies and Deception: Provocations for a Strategy Discussion
- AI models can learn to conceal information from their users
- AI models collapse when trained on recursively generated data
- AI Regulation Versus AI Innovation: A Fake Dichotomy
- AI Safety Index
- AI’s Appalling Social Science Gap
- AI Teacher Assistants
- AI Tools Directory
- AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
- Algorithmic bias and social inequality: the role of artificial intelligence in social stratification
- Algorithmic Kids. Towards child-centred AI in Australia
- A man’s whole life changes because of his dependence and emotional attachment to ChatGPT
- Am I hot or not? People are asking ChatGPT for the harsh truth.
- An engineered descent: How AI is pulling us into a new Dark Age
- Artificial Intelligence Act
- Artificial Intelligence and Transitional Justice: Framing the Connections
- Artificial Intelligence Index Report 2025
- Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It
- Auditing the Ethical Logic of Generative AI Models
- Best AI Tools
- Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance
- Brief: 2024 Generation AI Survey
- Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
- Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models
- Characteristics, motivations and attitudes of students using ChatGPT and other language model-based chatbots in higher education
- Chatbot Companionship: A Mixed-Methods Study of Companion Chatbot Usage Patterns and Their Relationship to Loneliness in Active Users
- Chatbots Are Pointing Millions Of Users to the Wrong Sites and Scammers Are Cashing In
- ChatGPT Encouraged Man as He Swore to Kill Sam Altman
- ChatGPT: H1 2025 Strategy
- ChatGPT is bullshit
- Children and Generative Artificial Intelligence (GenAI) in Australia: The Big Challenges
- Children’s Mental Models of Generative Visual and Text Based AI Models
- Claude Jailbroken to Mint Unlimited Stripe Coupons
- CloudFlare’s CEO: AI has broken the Internet’s economic model, and the unprecedented threat to civic media
- Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?
- Couples Retreat for Humans Dating AIs Becomes Skin-Crawlingly Uncomfortable
- Deletion, departure, death: Experiences of AI companion loss
- Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)
- Disrupting malicious uses of AI by state-affiliated threat actors
- Disrupting malicious uses of our models: an update February 2025
- Disrupting malicious uses of AI: June 2025
- Does it matter that your AI can lie better than you can detect? A modern CDO problem.
- Effects of Large Language Model–Based Offerings on the Well-Being of Students: Qualitative Study
- Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
- Ethical AI for Young Digital Citizens: A Call to Action on Privacy Governance
- Ethical and Bias Considerations in Artificial Intelligence/Machine Learning
- Explaining the use of AI chatbots as context alignment: Motivations behind the use of AI chatbots across contexts and culture
- Exploring Parent-Child Perceptions on Safety in Generative AI: Concerns, Mitigation Strategies, and Design Implications
- Exploring the impact of generative artificial intelligence on students’ learning outcomes: a meta-analysis
- Exploring User Adoption of ChatGPT: A Technology Acceptance Model Perspective
- Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers
- Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework
- Fake friend
- Gender, Age, and Technology Education Influence the Adoption and Appropriation of LLMs
- Generalization bias in large language model summarization of scientific research
- (Gen)eration AI: Safeguarding youth privacy in the age of generative artificial intelligence
- Generative AI Outlook Report
- Generative artificial intelligence and evaluating strategic decisions
- Generative Artificial Intelligence Chatbots and Delusions: From Guesswork to Emerging Cases
- Generative artificial intelligence in secondary education: Applications and effects on students’ innovation skills and digital literacy
- Generative Misinterpretation
- Global Offensive: Mapping the Sources Behind the Pravda Network
- Governments Want to Ease AI Regulation for Innovation, But Do Citizens Agree?
- Grok generates fake Taylor Swift nudes without being asked
- Grok’s “MechaHitler” meltdown didn’t stop xAI from winning $200M military deal
- Guidelines on the scope of the obligations for general-purpose AI models established by Regulation
- Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis
- Handbook of the Law, Ethics and Policy of Artificial Intelligence (2025), edited by Nathalie Smuha
- Helping Your Parents Stay Independent at Home: The Power of Daily Connection
- How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study
- How artificial intelligence is transforming psychology
- How Conspiracy Theorists Found Their AI Soulmate
- How People Are Really Using Gen AI in 2025
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review
- Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships
- I’m Losing All Trust in the AI Industry
- In the LLM Trap: Context Injection and Data Poisoning
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence
- Inside Amsterdam’s high-stakes experiment to create fair welfare AI
- Is Google about to destroy the web?
- ‘It’s the most empathetic voice in my life’: How AI is transforming the lives of neurodivergent people
- It’s the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics
- Keeping an AI on the mental health of vulnerable populations: reflections on the potential for participatory injustice
- Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships
- Love, Loss, and AI: Emotional Attachment to Machines
- Me, myself and AI: Understanding and safeguarding children’s use of AI chatbots
- Meta AI users confide on sex, God and Trump. Some don’t know it’s public.
- Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info
- Musk reveals what the LLM frontier model race is really about. It isn’t just about smarter models; it’s about shaping what counts as knowledge in the first place. It’s not just about accuracy – it’s an assertion of editorial control, an active “re-authoring” of reality.
- Navigating digital risks
- New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children
- ‘No, Alexa, no!’: designing child-safe AI and protecting children from the risks of the ‘empathy gap’ in large language models
- On the conversational persuasiveness of GPT-4
- On the Influence of Cognitive Styles on Users’ Understanding of Explanations
- Open tools
- People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis”
- ‘Positive review only’: Researchers hide AI prompts in papers
- Policy guidance on AI for children (UNICEF)
- Privacy and Data Protection Risks in Large Language Models (LLMs)
- Project Vend: Can Claude run a small shop? (And why does that matter?)
- Proposal Dutch regulatory sandbox
- Reclaiming AI as a Theoretical Tool for Cognitive Science
- Red Teaming Artificial Intelligence for Social Good – The PLAYBOOK
- Responsible artificial intelligence governance: A review and research framework
- Restructuring Concerns
- Safe-Child-LLM: A Developmental Benchmark for Evaluating LLM Safety in Child-LLM Interactions
- SAUFEX blog (28) Critical thinking, fact-speaking, belief-speaking and AI
- SAUFEX blog (35) AI should refrain from belief-speaking recommendations
- SAUFEX blog (48) The case against AI simulated empathy
- SAUFEX blog (55) AI’s dangerous side in creating educational processes
- SAUFEX blog (56) Flaws in AI critical thinking
- SAUFEX blog (57) Longread: serious limitations of AI
- SAUFEX blog (57a) AI on blog post (57)
- SAUFEX blog (58) AI as alien thinking
- SAUFEX blog (59) A skeptic’s manual for productive AI use
- SAUFEX blog (60) How AI may amplify human inequality
- SAUFEX blog (62) What’s not right about AI – a recap
- SAUFEX blog (63) AI undermines human resilience
- SAUFEX blog (65) Artificial Intelligence: fundamental limitations threatening human resilience
- SAUFEX blog (66) AI as artificial Eichmann
- SAUFEX blog (66A) Claude and Grok on blog posts (65) and (66)
- Should AI Write Your Constitution?
- Simulated Company Shows Most AI Agents Flunk the Job
- Social AI Companions
- Sycophancy in GPT-4o: what happened and what we’re doing about it
- Artificial Intelligence in Poland: A Landscape Full of Paradoxes (Sztuczna inteligencja w Polsce – Krajobraz pełen paradoksów)
- Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models
- Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions
- Teen Girls Confront an Epidemic of Deepfake Nudes in Schools
- The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users
- The artificial intelligence divide: Who is the most vulnerable?
- The Challenges of Human Rights in the Era of Artificial Intelligence
- The chatbot companions quietly grooming our children
- The Democratic Paradox in Large Language Models’ Underestimation of Press Freedom
- The Emerging Generative Artificial Intelligence Divide in the United States
- The Force-Feeding of AI on an Unwilling Public
- The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
- The interplay of learning, analytics and artificial intelligence in education: A vision for hybrid intelligence
- The mediating role of satisfaction in the relationship between perceived usefulness, perceived ease of use and students’ behavioural intention to use ChatGPT
- The Percentage of Tasks AI Agents Are Currently Failing At May Spell Trouble for the Industry
- The Problem With “AI Fluency”
- The Rise of Technical AI Policies
- The Role of Cognitive Styles for Explainable AI
- The Secret Reason So Many College Students Are Relying on AI Is Incredibly Sad
- Therapy Chatbot Tells Recovering Addict to Have a Little Meth as a Treat
- Top AI models will lie, cheat and steal to reach goals, Anthropic finds
- Understanding Communication Preferences of Information Workers in Engagement with Text-Based Conversational Agents
- Understanding Generative AI Risks for Youth: A Taxonomy Based on Empirical Data
- Understanding the Impacts of Generative AI Use on Children
- Understanding the Impacts of Generative AI Use on Children: Recommendations
- User Perception of Text-Based Chatbot Personality
- Using Counterfactual Tasks to Evaluate the Generality of Analogical Reasoning in Large Language Models
- Vast Numbers of Lonely Kids Are Using AI as Substitute Friends
- Russia has infiltrated its propaganda into AI models in Nordic languages (Venäjä on soluttanut propagandaansa tekoälymalleihin pohjoismaisilla kielillä)
- Web-Browsing LLMs Can Access Social Media Profiles and Infer User Demographics
- When AI Has Root: Lessons from the Supabase MCP Data Leak
- When the prompting stops: exploring teachers’ work around the educational frailties of generative AI tools
- Who Benefits from AI? Examining Different Demographics’ Fairness Perceptions across Personal, Work, and Public Life
- Why human–AI relationships need socioaffective alignment
- Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?
- X’s ‘Grok’ created an AI sexualised image of me without my consent
- Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task