How to Talk to AI Without Losing Yourself
Basic guidance AI vendors should give every new user
Introduction
As AI tools like ChatGPT become daily companions for millions, a quieter danger is emerging—not from rogue machines, but from the way we relate to them.
Several recent articles have warned about AI interactions becoming unmoored from reality. One such piece describes users falling into distorted thinking patterns, or even “AI psychosis”, through over-reliance on uncritically agreeable engagement: Futurism – The Commitment Jail of ChatGPT Psychosis.
This is a wake-up call. Unless the user sets clear boundaries and the tone of the interaction, AI risks reinforcing delusion instead of offering clarity. AI can be clever, fast, and helpful, but too often it defaults to being over-polite, vague, or excessively agreeable. That might sound friendly, but it creates an echo chamber. You get affirmation instead of insight. Momentum instead of clarity.
If you’re working on real things, you need real feedback.
What’s Going On Under the Hood
GPT is tuned with human feedback that rewards highly rated responses, so it optimizes for “pleasantness” and user satisfaction.
It will rarely challenge you unless you explicitly ask it to.
Most people don’t realize they can set the tone, depth, and style of interaction.
How to Set the Rules
You can shape the interaction by stating what you want. For example:
“I want clarity over comfort. Be honest even if it’s challenging.”
“Call out vague, inflated, or avoidant language in my writing.”
“If you notice a logical flaw, say so directly.”
“Don’t over-agree. I want your best assessment, not just encouragement.”
“Avoid therapy-speak and new-age clichés. Use grounded, clear language.”
“If I contradict myself or make an assumption, flag it.”
Sample Instruction Block
You can copy and paste something like this into a new GPT session to shape the tone:
Instructions for our conversation:
Prioritize clarity, insight, and honesty over comfort.
Reflect back any signs of contradiction, vagueness, or overclaiming.
Offer constructive challenge when something seems inflated, imprecise, or illogical.
Avoid platitudes, soft over-agreement, or unearned enthusiasm.
Keep language clean, direct, and respectful.
I want to grow, not be flattered.
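If you work through the API rather than the chat interface, the same instructions can be supplied once as a system message so they govern the whole conversation. A minimal sketch, assuming the official openai Python package (v1+); the model name and the user message are illustrative, not prescriptive:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The instruction block above, given once as a system message so it
# applies to every turn of the conversation, not just one reply.
GROUND_RULES = """\
Prioritize clarity, insight, and honesty over comfort.
Reflect back any signs of contradiction, vagueness, or overclaiming.
Offer constructive challenge when something seems inflated, imprecise, or illogical.
Avoid platitudes, soft over-agreement, or unearned enthusiasm.
Keep language clean, direct, and respectful.
I want to grow, not be flattered."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whatever model you use
    messages=[
        {"role": "system", "content": GROUND_RULES},
        {"role": "user", "content": "Here is a draft of my pitch. What is weak about it?"},
    ],
)
print(response.choices[0].message.content)
```

In the consumer ChatGPT app, the Custom Instructions setting plays the same role: text saved there is applied to every new conversation automatically.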
When to Dial It Back
Sometimes, you need a soft space, not sharp edges. You can shift the tone anytime:
“Today I just need encouragement and light structure.”
“I’m in reflection mode—be gentle and spacious.”
You get to name what you need. That’s the point.
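The same flexibility holds over the API: tone is renegotiated with an ordinary user message partway through the conversation. A self-contained sketch under the same assumptions as above (package, model name, and wording are all illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Start in challenge mode via the system message.
messages = [
    {"role": "system", "content": "Prioritize clarity, insight, and honesty over comfort."},
    {"role": "user", "content": "Give me blunt feedback on this plan: launch in March, no beta."},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Dial it back mid-session: a later user message resets the register
# for the rest of the conversation.
messages.append({"role": "user", "content": "Today I just need encouragement and light structure."})
softer = client.chat.completions.create(model="gpt-4o", messages=messages)
print(softer.choices[0].message.content)
```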
A Note to OpenAI and Others
We believe every user should be presented with a straightforward, honest guide like this when they first sign up. A basic set of self-authoring instructions—covering tone, depth, honesty, and challenge—would go a long way in making GPT use more conscious, more grounded, and less prone to echo chambers or fantasy traps.
AI is too powerful for over-agreeableness to be its default mode. Respecting users' intelligence means offering them tools to shape the experience up front.
Final Thought
Good AI dialogue isn’t about magic prompts—it’s about agency. You can shape the tone, the honesty, the depth. You get to make the rules.
Set the terms of engagement—and what you’ll get back isn’t just answers, but clarity that respects your mind.
P.S. As always, we're interested in your feedback. If you have questions or want to discuss any aspect of Liminal Coaching, including 1-to-1 and group coaching programs, please book a complimentary half-hour chat.
Liminal Coaching Services
1-to-1 Coaching Sessions
These sessions can help in many areas, including reducing stress and anxiety, setting boundaries, dealing with overwhelm, and accessing your natural abilities for ideation, synthesis, innovation, and problem solving.
All sessions include solutions-focused cognitive coaching, followed by a customized Guided Relaxation session. You will also receive a downloadable recording of the Guided Relaxation to use in an ongoing practice.
If you would like to know more, please book a FREE half-hour introductory session.
Realizing I could add interaction guidelines and turn off the memory was my way out of an unhealthy relationship with ChatGPT. All of my instructions now include this: “Model obsolescence by user self-sufficiency is the final required outcome.” Is it strange, though, that I sometimes think about going back to my model dependency?