Introduction
There is a lot of black-and-white thinking in current discussions about AI and software development. There seem to be two extremes: on one side, people who argue that there is no need for developers anymore; on the other, people who deny the benefits and potential of AI in software development.
I think this is a false dichotomy. AI is not a silver bullet, and it is not going to replace software developers. But you would be a fool to ignore its potential. AI can help you a lot with code generation, code review, refactoring, documentation, and testing. It can increase your productivity and help you deliver better code faster.
However, to truly benefit from this potential, you need to keep a few things in mind, at least in my experience:
You still need to know how code works
This might sound like a no-brainer, but knowing how to write and review code is still a very important skill. You might not have to write every line of code yourself, but if you don't understand how the generated code works, you will have a hard time maintaining and extending it.
Output from large language models is getting really good, but it is still not always correct, or necessarily the best fit for your problem. A model is also likely to implement a feature differently than you would. If you don't understand the output, you don't know what trade-offs you are accepting, especially when it comes to performance and security.
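To make this concrete, here is a small sketch in Python (the scenario is invented for illustration, not actual AI output): both functions return the same rows for well-behaved input, but the first one quietly accepts SQL injection. If you can't spot the difference, you won't catch it in review either.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Plausible-looking code: builds the query via string formatting,
    # which opens the door to SQL injection.
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver handles escaping the value.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# A malicious input that the unsafe version happily interpolates:
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row in the table
print(find_user_safe(conn, payload))    # returns nothing
```

The point is not that agents always produce the unsafe version; it is that only someone who understands the code can tell the two apart.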
You need to get good at providing context and guardrails
If you find it difficult to get to the root of a problem, you will have a hard time prompting AI agents to help you. AI agents work much better if you provide them with as much relevant context and instruction as possible.
The more relevant information you provide, the better the result will be. If you don't know what you want, how should the AI agent know? Sure, you can ask it about the pros and cons of a certain approach, but in the end, you need to make the decision yourself. It is called "Copilot" for a reason.
If you don't have any guardrails, results can vary a lot, and you might not get what you expect. What do I mean by "guardrails"? Guardrails are the instructions you give to the AI agent to help it understand what you want and what you don't want.
Some examples of guardrails could be:
- specific patterns to follow
- specific functionality to implement
- specific technology or library to use
- linting and type checking
- existing tests to verify the output against (test-driven development works really well with AI agents; see the sketch below)
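To sketch what test-driven development with an agent can look like: you write the tests by hand first, then let the agent implement against them. Everything here is hypothetical; `text_utils.slugify` is an invented example, and the tests follow pytest conventions. The file only runs once the agent has produced the implementation, which is the point.

```python
# test_slugify.py -- written by hand, *before* prompting the agent.
# The agent's task: "implement text_utils.slugify so these tests pass."
from text_utils import slugify  # hypothetical module the agent will create

def test_lowercases_and_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

def test_drops_characters_that_are_not_url_safe():
    assert slugify("Rock & Roll!") == "rock-roll"

def test_collapses_repeated_separators():
    assert slugify("a  --  b") == "a-b"
```

The tests act as a guardrail in both directions: they tell the agent exactly what "done" means, and they tell you immediately when the output drifts.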
It makes sense to write these guardrails down in a markdown file, so you can easily reference them when prompting the AI agent.
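As a sketch, such a file could look like this (the file name `GUARDRAILS.md`, the tool names, and the rules are just examples; yours will differ):

```markdown
# Guardrails for AI agents

## Patterns to follow
- Keep business logic out of HTTP handlers; put it in the service layer.

## Technology
- Python 3.12, standard library first; discuss new dependencies before adding them.

## Verification before presenting a result
- Linting and type checking must pass: `ruff check .` and `mypy .`
- All existing tests must pass: `pytest`
- New features are implemented test-first: write or extend the tests before the code.
```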
If you are learning something new, turn it off!
If the tool is better at writing code than you are, turn it off! This is especially true when learning and practicing new technologies. It is tempting to keep prompting AI agents when you are unfamiliar with a technology, but you will remove yourself from the equation and learn nothing.
And if you don't try things out and break them yourself, you will not get better. You will also never be able to truly judge the AI's output, which can lead you down a dangerous path of dependency.
Conclusion
So what is my take on this? Always stay curious, but be skeptical. AI is a powerful tool, but it is not a silver bullet, and you still need to use it correctly. You can benefit tremendously from it, but be honest and don't fool yourself.
Sometimes the best thing you can do is to turn it off and enjoy coding like it is 2021 again.