How do you learn a new skill using AI without it giving you the full solution right away? And which LLM should you use?
In a discussion forum with @monatruong_murror , we talked about how AI can help us learn things that aren’t naturally familiar to us, like programming.
The biggest challenge was/is:
Getting AI to guide you toward a solution, instead of just giving you the answer.
This problem has two sides:
– As a beginner, you don’t know how to clearly define what you need.
– AI either gives incomplete answers or jumps straight to full solutions because the instructions aren’t precise enough.
At the moment, I try to approach it this way:
I research tools or projects that I want mine to be similar to.
I use prompts like: “I’m a beginner, explain this to me like I’m 5 years old”
“Don’t show me the final solution, guide me step by step”
“How should I approach this and where should I look?”
“Show me some of the best examples of how to do this”
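Constraints like the ones above can be bundled into one reusable system prompt, so you don't have to restate them in every chat. A minimal sketch in Python; the exact wording is just an illustration, not a tested magic formula:

```python
# A reusable "tutor mode" system prompt assembled from the constraints above.
# The wording of each rule is illustrative; tune it for your model of choice.

TUTOR_RULES = [
    "I'm a beginner: explain concepts like I'm 5 years old.",
    "Never show me the final solution; guide me step by step.",
    "Tell me how to approach the problem and where to look.",
    "Point me to a few of the best existing examples instead of writing code for me.",
]

def build_tutor_prompt(topic: str) -> str:
    """Combine the rules into one system prompt for a given topic."""
    header = f"You are a patient programming tutor helping me learn {topic}."
    rules = "\n".join(f"- {rule}" for rule in TUTOR_RULES)
    return f"{header}\nFollow these rules strictly:\n{rules}"

print(build_tutor_prompt("Python"))
```

Pasting the whole block as the first message (or as a custom/system instruction, where the tool supports one) tends to hold better than sprinkling the rules across the conversation.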
The issue is that AI (e.g. Anthropic’s Claude) still tends to slip into generating full code, which completely disrupts the learning process and understanding.
So my question is:
How would you use AI in a way that actually guides your thinking instead of replacing it?
And one more thing:
Maybe it’s also about the model choice. Would something like OpenAI Codex or Perplexity be better, or is it less about the tool and more about how you ask?


Replies
I think it comes down to the tool, but it also comes down to your approach, as in: how involved do you want to be? For example, do you want to look at every line of code and have the tool explain what each line does and why? Or do you not care about the code and are stepping back to look at the bigger picture of what you are building and why? I personally don't need to look at any code; I don't care about it enough to warrant a line-by-line investigation. My approach is: is the code doing what it's supposed to do? Is it written to the accepted standards for the language being used? Does it follow at least minimal code-security practices, and is it secure in general? When I look at my projects from this perspective, saying things like "explain this to me like I am 5" isn't necessary, because I have already explained what I want in detail in my prompts and in .md files. I have the AI look at those and ask me questions about its understanding of what I want and am looking for. This approach lets me hash out things I may not have thought of or may have overlooked.
Just my $0.02
minimalist phone: creating folders
@david_sherer I would prefer to know what each exact part does, because then I can replicate that knowledge and apply it to the next project; the knowledge keeps accumulating.
@busmark_w_nika I can understand that; I just look at it from a different angle. For example, if I have 3 websites and each has the same function, like a contact form, I just reuse the code that is already built and proven, and I understand how it works and how it integrates with my M365 app permissions, etc. I get it. My approach is to keep the AI prompt I used to create that form and reuse that prompt. AI may use different code to get the same results, and I am okay with that. I also go through security checks and look for leakage, which helps validate the code. As for the M365 app resource that lets the webpage send email: as long as the code points to my M365, I don't have to change anything there.
@david_sherer Of course, I can copy and paste what I already did before, though I would like to repeat the process itself. When I did something half a year ago and haven't done it repetitively since, I barely remember the process :D
@howell4change I actually want to build a plugin first. I need motivation (a certain product to start with).
I believe this challenge can be solved by imposing strict constraints on the LLM, ensuring it complies without deviation.
The key lies in how we structure it, much like learning apps that guide users through a set journey, or why people buy structured courses.
Instead of random exploration, instruct the LLM to create a core curriculum divided into beginner, intermediate, and advanced levels. This enforces a progressive order, building skills step by step.
@rohanrecommends That's a good point. I usually just say "I am a beginner" but totally missed the part about creating a curriculum structure with prerequisites.
App Finder
I sure wouldn't use AI to learn something like programming. There are enough great textbooks written with great care by people who have much experience with the subject and with teaching the subject.
@konrad_sx In that case, wouldn't it take way more time to research exactly what you need?
@busmark_w_nika Whatever you want to program, you need to understand the basics. Once you do, it's not so difficult to find out what more specific things you need for a specific project.
I'd start learning a popular language like Python using a popular and highly rated book; check e.g.
https://www.amazon.com/s?k=python+programming&i=stripbooks-intl-ship&s=review-rank
@konrad_sx Maybe I should visit a library. 👀
@busmark_w_nika Great idea!!
But you need a good library, a small "province library" won't have good programming books...
Great question — but I think it’s less about the model and more about how you use it.
The key shift: don’t use AI as an answer machine, use it as a strict tutor.
What works better:
Set rules: “Don’t give full solutions. Only hints + questions.”
Use a loop: you explain → AI critiques → AI asks → you try again
Ask for feedback, not answers: “What’s wrong with my approach?”
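The explain → critique → question loop can be sketched as a tiny wrapper around whatever chat API you use. `ask_model()` below is a placeholder stub, not a real client call; swap it for your provider's SDK:

```python
# Sketch of the "you explain -> AI critiques -> AI asks -> you try again" loop.
# ask_model() is a placeholder for your actual chat API call (OpenAI,
# Anthropic, etc.); here it returns a canned reply for demonstration.

SYSTEM = (
    "You are a strict tutor. Never give full solutions. "
    "Reply only with a short critique of my explanation, "
    "then one question that pushes me to the next step."
)

def ask_model(system: str, user: str) -> str:
    # Placeholder: replace with a real API call for your provider.
    return f"[critique of: {user[:40]}...] What edge case are you missing?"

def tutoring_round(my_explanation: str) -> str:
    """One turn of the loop: I explain, the model critiques and asks."""
    return ask_model(SYSTEM, my_explanation)

reply = tutoring_round("I think a for loop repeats code a fixed number of times.")
print(reply)
```

The important part is that every round starts with *your* attempt; the system prompt forbids the model from replacing it with a finished answer.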
Most models can do this — the difference isn’t huge.
Claude just tends to be more “helpful” (and over-explain), but the real lever is your structure.
If AI keeps giving full answers, you’re (unknowingly) still asking for them.
Curious: have you tried forcing it to ask you questions first?
@eliasweiser Okay, but here is a typical situation: I do not know how to write proper code, where to start, etc. So based on what am I supposed to write the code myself when AI has already pushed the whole code at me? For me, it is like cooking without ingredients.
It depends on the person. If I want to learn something, my prompts or questions will reflect that. For example, if I want to know how something works and how I can make it myself, the AI will give me a detailed guide with instructions that I can follow.
Use prompts like this: "I want to learn [skill]. Act as my tutor. Break it into small concepts. Teach me one at a time. Quiz me before moving on. If I get stuck, give a hint first, not the solution."
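A prompt like that can be packaged as a chat-style messages list so the `[skill]` slot is filled in per topic. A small sketch; the roles follow the common system/user convention, adapt to your provider:

```python
# Package the tutor prompt above as a messages list for a chat-style API.
# The exact message schema is the common {"role", "content"} convention;
# check your provider's docs before sending it.

def tutor_messages(skill: str) -> list[dict]:
    system = (
        f"Act as my tutor for {skill}. Break it into small concepts. "
        "Teach me one at a time. Quiz me before moving on. "
        "If I get stuck, give a hint first, not the solution."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"I want to learn {skill}. Where do we start?"},
    ]

print(tutor_messages("SQL"))
```

Keeping the rules in the system message (rather than the user turn) usually makes them harder for the model to drift away from as the conversation grows.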
@sanath_bhat I am just curious how much time it takes to break it down and learn it like this.
I think proper prompt engineering, and thinking about a solution before leveraging AI, helps you learn while simplifying the process. Imagine you are building a website created by AI, but certain features, buttons, or elements are not functioning. Rather than going straight to AI to solve the issue, it is helpful to understand the context and learn the "whys".
It's a give-and-take relationship. AI is great, but giving it instructions without learning the context or reasoning puts you in a deep technical hole. You may end up spending more time troubleshooting your prompt than prompting for the right solution.
@calvin_lim_1 I asked it to create a visual, and from that I can see possible features, so I ask more detailed questions about how each feature will work and what I expect from it (to work a certain way).
Great question. I don't have a better answer than those already given. Models are improving so fast that the answer you seek will likely change just as fast as the models develop.
Are we dumbing down our critical thinking skills by using AI to provide the answer?
@robert_vassov I would say that when we rely on AI way too much, we lose our ability to think (or the effort to think).