Stop Using Cursor as a Completer: Skills Are the Key
Last night, I watched a friend struggle with Cursor for nearly forty minutes while trying to modify a project.
It wasn’t that he couldn’t write prompts. The problem ran deeper. Every time he started a new session, he had to re-explain the project structure, tech stack, naming conventions, and interface boundaries, plus add the note: “Don’t touch this directory; it’s in production.” By the time he finished, the AI was just warming up. And when it finally started writing, it often went off track, either touching files it shouldn’t or producing code that ran fine but didn’t fit the team’s conventions.
I’m all too familiar with this scenario.
Between 2024 and 2025, in conversations with several teams about AI coding tools, the complaint I heard most was not that the AI couldn’t generate code, but that the output was hard to manage once generated. You can prompt it to write, but the moment you expect it to keep producing the same style for a week straight, problems appear.
Many people assume their trouble with Cursor comes from prompts that aren’t long or precise enough, or from a weak model. That’s rarely the primary issue. Far more often, they treat what should be fixed context as throwaway chat content, re-typing it every single session.
In simple terms, whether Cursor evolves from a “high-level completer” to a “reliable co-pilot” often depends not on the model but on the skills.
The term skill can be replaced with rules, playbooks, or project workflow templates. The name isn’t important. Essentially, it answers four key questions in advance: Who are you? What is this project? What absolutely cannot be done? What order should tasks be handled in when encountering certain types of tasks?
In one sentence:
Skills don’t make AI smarter; they prevent it from taking the detours you’ve already navigated.
Why Many Users Feel More Exhausted with Cursor
The most common misconception I’ve seen is treating Cursor as a powerful, always-available intern while never handing it an onboarding manual.
The result: even in the same repository, with similar requirements, you redo three things every time.
First, re-explain the background. Is this repository a monolith or microservices? Does the frontend live in apps/web or src/client? Do tests use Jest or Vitest? Are API responses wrapped in a data field? Without a fixed entry point, the AI can only guess, and when it guesses, the style drifts. Put bluntly, the results get absurd.
Second, re-explain the standards. For example: don’t write overly long functions, don’t casually introduce new dependencies, add tests after every modification, and route all interface calls through the service layer instead of calling fetch directly from the page. If you don’t spell these out, it won’t follow them consistently. Say them today, and it forgets tomorrow when you start a new conversation. This gets frustrating fast.
Third, re-explain the process. Many people open with “Help me fix this bug.” The problem is that a reliable process is never a direct fix. It should be: read the error, identify the impact scope, explain the proposed solution, modify the code, and list verification steps at the end. Without that process, the AI takes the path of least resistance, which is often not what you want.
The most annoying part isn’t just fixing mistakes.
It’s that you slowly develop the impression that the tool is brilliant on some days and remarkably dumb on others. In reality, it isn’t suddenly smart or suddenly confused; far more likely, the quality of context you provide varies each time. Unstable context produces unstable output. The two of you end up circling each other, and you get more and more exhausted.
This is the first layer of the problem that skills aim to solve: turning high-frequency, repetitive, easily overlooked background information into long-term reusable default premises.
What Skills Actually Supplement: Work Methods, Not Prompts
I increasingly dislike defining skills as “a more advanced prompt.” This understanding is somewhat superficial.
A truly useful skill should encompass at least four layers of information.
One layer is the role. What role do you want Cursor to play right now? A cautious reviewer? A researcher who investigates before acting? A bug fixer making minimal changes? Different roles yield entirely different outputs.
Another layer is the project context. Repository structure, core modules, dependency constraints, directories that must not be touched, existing scripts, and team-preferred commands. The more specific, the better. Avoid vague statements like “please adhere to best practices”; they are useless. Instead, write things like “prioritize searching with rg,” “read README.md and CONTRIBUTING.md before modifying,” “do not upgrade dependencies without explicit request,” and “do not modify lockfile unless I explicitly ask.”
Another layer is the execution checklist. For a given type of task, what comes first, what comes next, when must it stop and ask, and when may it proceed on its own? This layer is particularly valuable, because most failures come not from weak coding ability but from doing things in the wrong order.
The final layer is the output format. For example, you might require it to first give conclusions, then changes, and finally verification commands; or to list risks before proceeding. These format constraints may seem trivial, but they directly affect collaboration costs. Many reworks aren’t due to coding errors but rather unreliable reporting methods.
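Put together, a skill covering all four layers fits on one screen. Here is a minimal sketch; the headings, wording, and file paths are mine, for illustration, not any required format:

```markdown
# Skill: Fix a bug (minimal-change mode)

## Role
You are a cautious maintainer. Prefer the smallest diff that fixes the bug.

## Project context
- Frontend lives in apps/web (path assumed for this example); tests use Vitest.
- API responses are wrapped in a `data` field. Go through the service layer;
  never call fetch directly from the page.
- Do not upgrade dependencies or touch the lockfile unless explicitly asked.

## Process
1. Read the error and reproduce it before changing anything.
2. State the impact scope and your plan in two or three sentences.
3. Make the change, then add or update a test.
4. If tests fail, state exactly where; do not claim completion.

## Output format
Conclusion first, then the changes, then the exact verification commands.
```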
You see, skills fundamentally manage not “expression” but “method.”
The same request to “fix this bug” feels like improvisation without skills; with skills, it feels like entering a well-structured editorial department with SOPs.
I even suggest writing down the most useful trivialities. For example:
- Before starting a task, decide whether additional context is needed.
- If more than three files are involved, present a modification plan before proceeding.
- If the user has uncommitted changes, do not overwrite them; reconcile first.
- If tests fail, state exactly where the problem is; do not pretend the task is complete.
These statements aren’t sophisticated.
But they are lifesavers.
What to Prioritize: Three Types of Skills
Many people jump straight to building a comprehensive skill system and end up with a document museum: the directory looks impressive, but nobody consults it, and the AI doesn’t reliably apply it.
Don’t go that big.
Start with just three types.
1. Project Onboarding Skill
This skill addresses the issue of “having to reintroduce the project every time.”
The content can be quite simple: project structure, key directories, tech stack, common commands, coding style, restricted areas, and how to validate changes. Keep it to 300–600 words plus a few critical file paths. It doesn’t need to cover everything; it just needs to stop the AI from going off track at the start.
For example, you can specify:
- Read README.md first
- Prioritize searching with rg
- Follow existing hooks style when modifying React code
- Check the api and service layers before modifying interfaces
- Don’t claim “already validated” without running tests
Once these constraints are established, you’ll noticeably save time in the first ten minutes of conversation.
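As for where this lives: recent Cursor versions read project rules from Markdown files under `.cursor/rules/` (older versions used a single `.cursorrules` file at the repo root). The frontmatter below is a sketch; check the exact fields against your Cursor version’s documentation:

```markdown
---
description: "Project onboarding: structure, commands, and no-go zones"
alwaysApply: true
---

- Read README.md and CONTRIBUTING.md before making any change.
- Search with `rg`; do not walk the tree file by file.
- Follow the existing hooks style when modifying React code.
- Check the api and service layers before modifying interfaces.
- Never claim "already validated" without actually running the tests.
```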
2. High-Frequency Task Skill
Extract the most common tasks into templates.
For instance: fixing a production bug, building an admin-panel form, integrating an API, adding unit tests, reviewing a PR. Each task has a different order of judgment calls. A bug fix should reproduce the issue before changing anything; a review should surface risks before discussing merits; new tests should confirm current behavior before writing assertions.
Don’t be afraid to write plainly. The more a task skill reads like the operations manual your most reliable colleague left behind, the better. Put it another way: the less it reads like an official tutorial, the more likely it is to survive in the team.
I value review skills especially highly, because they pay off immediately. Without a skill, the AI tends to write reviews like “overall good, suggest improving readability.” Comments like that might as well go unread. With rules in place, you can force it to report bugs, performance risks, behavioral regressions, and missing tests first, and only then decide whether a summary is worth writing.
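To make that concrete, a review skill in this spirit might look like the following sketch; the priority order is the point, not the exact wording:

```markdown
# Skill: PR review

Report findings in this order, skipping a category only if it is empty:
1. Bugs and behavioral regressions (cite file and line).
2. Performance risks (quantify where possible).
3. Missing or weakened tests.
4. Style and readability, only after the above.

Never open with "overall good". Lead with the riskiest finding.
```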
3. Boundary Constraint Skill
This skill specifically addresses “don’t mess around.” Many incidents start from “just a quick fix.”
Spell out which directories must never be modified, which commands must never be run directly, when manual confirmation is required, when to act conservatively, and when to take initiative. Most incidents happen not because the AI can’t write code but because it’s too eager to help. Once it gets enthusiastic, it starts casually refactoring, upgrading, and cleaning up, and that casualness often ends in disaster. By the time you look back at the git diff, the damage is overwhelming.
Therefore, boundaries must be clearly defined.
Can files be deleted? Can schemas be modified? Can dependencies be updated? What should it do with a dirty workspace? When the request conflicts with the current state of the code, should it keep guessing or stop and ask? If you don’t specify, the AI falls back on its default preferences, which are usually more aggressive than yours.
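Written out as rules, a boundary skill can be as blunt as this; the specifics below are examples, and yours will differ:

```markdown
# Skill: Boundaries

- Never delete files or modify database schemas without explicit confirmation.
- Never upgrade dependencies or modify the lockfile unless explicitly asked.
- If the workspace has uncommitted changes, stop and ask before writing.
- If the request conflicts with the current state of the code, list the
  conflict and wait; do not guess.
```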
The Effective Process for Using Skills
If you want to start today, I recommend not spending too long on theory but rather following this order.
First, choose a task you will perform at least twice a week. Low-frequency tasks aren’t worth abstracting into skills.
Then copy, verbatim, the phrases you found yourself repeating to Cursor across your last three attempts. Verbatim, mind you. Don’t polish them. The sentences you have to say every single time are the best raw material for a skill.
Next, divide them into three sections: background, process, and constraints. Background answers “what is this?” Process answers “how to do it?” Constraints answer “what should not be done?” At this point, a usable skill is basically formed.
Take another step forward.
Add two examples: one good example and one bad example. The good example tells the AI what meets expectations; the bad example shows which actions seem proactive but actually complicate matters. Adding just one example can significantly enhance stability. Even a 30% improvement in stability can save you a lot of back-and-forth communication in a week.
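A good/bad pair can be very short. The scenario below is invented purely to show the shape:

```markdown
## Good example
Request: "Fix the null crash on the order page."
Expected: reproduce the crash, patch the one missing guard, add a test,
list the verification command.

## Bad example
Request: same.
Not acceptable: the crash is fixed, but three variables were renamed, a
dependency was upgraded, and the whole file was reformatted. The diff is
now unreviewable.
```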
Another detail many people overlook: skills aren’t finished once written; they should evolve with the project.
Each time you notice Cursor making a repeated mistake, don’t just correct it in that conversation. Incorporate that correction back into the skill. Each time you find a particular output format significantly reduces communication, don’t just remember it; write it down. This way, it will increasingly resemble a member of your team rather than a temporary contractor.
Here’s a practical test: if a skill doesn’t cut your background explanation in half, or save you at least two rounds of course correction, it’s probably too vague. Delete it and start over. Don’t get sentimental.
Skills aren’t collectibles.
They should function like a wrench, ready for use.
So stop asking “how do I make Cursor smarter.” Change the question. Better yet, make it a harder question:
Have you seriously handed over your work methods to it?