With the use of LLMs becoming more mainstream in everyday business functions and processes, we are living through a pivotal shift in history. While we are still only at the very beginning, lessons are quickly being learned.
I’m Michael, a software engineer at Verax. Programming, coding and building software have been a large part of my life since my teen years, and as LLMs and Gen AI have gathered pace recently, I’m sharing my thoughts on my journey with them so far.
A quick timeline of my working life
So, to start, let’s put some things into perspective. I started programming at the age of 14, a time when receiving my first floppy disk was a momentous occasion. And it was during my university years that the internet took off and my exploration of the world wide web began.
At the age of 19, while still studying, I started writing code for my first employer, and I’ve stayed within this field ever since. My experience spans companies of various sizes, from smaller scale-ups to huge corporates like Intel and Amazon.
Today at Verax, my main focus as a software engineer is working with LLM behaviour as well as using different approaches in unsupervised learning.
How developers’ assistants have changed
During my time, I’ve experienced a vast change in how we software engineers learn and source help for our projects and work.
Of course, the first help came from read-only media like the CD or DVD, which, while useful, was incredibly limiting; a compact disc can only hold so much information. Then came peer-to-peer sites that were a little clumsy but started to open up the world of sharing experiences and new workarounds.
Finally, as many will know, StackOverflow became the leading resource in software development. It took some time to get used to, but soon enough working without it became unimaginable.
But as with anything, alternative sources were still important to utilize because we can’t get everything from one place. StackOverflow alternatives like Boto sites for AWS, RedHat subscription sites and Doran Holan’s blog provided and still provide strong and consistent help that’s effective, extremely user-friendly and specific to the need.
My changing approach to Gen AI
When the Generative AI revolution made its big entrance a couple of years ago, I saw many of my younger peers playing around with it. I, though, was happy to keep to my usual, well-tested methods, my system of notes and my experience to do my job.
While I happily accepted a nice IDE addition for context-sensitive autocompletion of code, for the most part, I was content with my top-down style and good coding/debugging time ratio.
However, this all changed when I became the first employee at Verax. I was challenged with bold deadlines to create a working demo, to investigate LLMs and to use them for specific business logic. It made much more sense, then, for me to lean on the LLM to offload some of my daily tasks.
Now, after developing my own prompt style, I’m using it every day. A piece of code that would usually take around 20 minutes to create and an additional 5 minutes to test and debug now requires just 1 minute to create the prompt and around 2-3 minutes to test the generated code. It’s a no-brainer.
The importance of embracing the revolution
I believe it’s important and beneficial for the older engineers amongst us to embrace the shift because despite having strong problem-solving skills and all the experience behind us, we’re still susceptible to the usual aging symptoms like weaker memory and less capacity for longer working hours.
I’ve helped other software engineer veterans start using AI, too. For example, a friend who was working on low-level Linux tasks was sceptical about incorporating AI into his process. But after I showed him a few simple tricks, he saw how LLMs can help with these kinds of jobs, and immediately, too. There’s no upskilling or onboarding time involved.
Additionally, and maybe something that will resonate more: it’s simply quite boring to write repetitive pieces of code, like parsing, or to dig through libraries for it. This is all solved with AI.
Ultimately, I now view LLMs as a friendly and knowledgeable colleague. From my days in the office, I know the importance of having others nearby to ask for advice or to bounce ideas off. Without this dynamic available, you’re often stuck for much longer. So, for the software engineer of today, an LLM can be that someone, even if there are some inevitable limitations, like asking for a new direction or about common infrastructure.
The way I work with LLMs
Sticking to smaller tasks
Similar to using StackOverflow, at this point there is a limit to how much can be achieved with prompts to LLMs. For me, the limits are:
- Bigger tasks, like writing a whole class Foo, inevitably bring along aesthetic coding variations. These aren’t in my style, so seeing core code written this way is quite distracting.
- Quick tests on big chunks are harder to perform.
- There’s a higher probability of hallucinations or mistakes in the generated code.
For this reason, I very rarely ask the LLM for large pieces of work at once but instead, produce them piece by piece.
For example:
Prompt:
in Typescript, I want to convert a standard Request body.val which can be string, boolean, integer, float - to string, using JSON.stringify, how to do it safely?
Response:
```typescript
function valueToString(val: string | number | boolean): string {
  // If already a string, return as is
  if (typeof val === 'string') return val;
  // For other types, use JSON.stringify
  try {
    return JSON.stringify(val);
  } catch (e) {
    // Fallback in case JSON.stringify fails
    return String(val);
  }
}

// Usage examples:
const str = valueToString("hello");  // "hello"
const num = valueToString(123);      // "123"
const float = valueToString(123.45); // "123.45"
const bool = valueToString(true);    // "true"
```
Like a dialogue with a human peer, context is key for strong, relevant results. So with Claude, I now often stick to the same subsystem I’m developing for at least a week. This helps with:
- Shorter prompts as Claude ‘remembers’ what we were ‘talking’ about.
- Feedback and fixes.
- Same naming convention and style.
- A bigger piece of code being built up incrementally.
Remarkably too, by staying within context, the LLM can find the root cause of an error you’ve identified but don’t understand. For example:
Prompt:
when I execute it regularly it works, but when I run it with sudo - it tells me ./add_global_section.bash: 4: [[: not found
Response:
Add the #!/bin/bash shebang at the top of your script to ensure it is always run with Bash.
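That advice checks out: without a shebang, the program invoking the script decides how to interpret it, and on many distributions sudo falls back to /bin/sh (often dash), which has no [[ ]] builtin. A minimal sketch of the fix, using a hypothetical script body in place of the original one:

```shell
#!/bin/bash
# Without the shebang above, the caller decides which shell runs the
# script: an interactive Bash session falls back to Bash itself, while
# sudo falls back to /bin/sh (dash on Debian/Ubuntu), which has no [[.

name="global"

# [[ ]] is a Bash extension; under plain sh it fails with "[[: not found".
if [[ "$name" == "global" ]]; then
  echo "adding global section"
fi
```

With the shebang in place, the kernel hands the script to Bash regardless of whether it is launched directly or via sudo, so the [[ ]] test behaves the same in both cases.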
By the same token, I prefer to ask Claude unrelated questions in a separate chat to avoid confusion. Sometimes, though, questions from an old topic come up after starting a new chat, which can take us in a different direction. So when I need to get back on track, I enter a prompt such as, “do you remember that we’ve talked about the state machine?” and this usually helps.
A few other tips I learned along the way
- Provide feedback: this helps build out the context for your project, and it provides additional learning for the LLM. Prompt: “Thanks! I went for the first approach and it worked by…”
- Even in small tasks, I find there’s around a 20% chance that the response isn’t what I meant. In these instances, I point out the discrepancy to the LLM and the next draft is often correct.
- Google is still a good tool for basic questions needing a concise response.
- LLMs are helpful for comparing different approaches or for writing a bash script that behaves in a certain way. But they can be overkill for simple questions.
- Each LLM has its own specialty. For example, I’ve found Claude is great for Python coding, while ChatGPT is best for further learning.
My final thoughts, for now
GenAI’s role for humanity as a whole, and for software engineers, is still wildly unpredictable. We’ve already seen some significant implications and repercussions, like copyright issues, the redefinition of art and fake identities. The list goes on and, for now, will continue to grow.
The transition to using them is similar to the initial shift to StackOverflow: a little daunting, with a lot of unknowns to adapt to. However, from an engineering point of view, my key takeaway is that LLMs are great and I’d encourage everyone to use them, but to use them wisely.
They still can’t replace the tried-and-tested design process in full, so don’t blindly copy and paste results into production code: in my experience, the bug rate of between 10 and 15% eats into any efficiency you hoped to gain.
Also, know that LLMs aren’t a catch-all for everything you need to know. Continue to utilize the other great resources available that are built specifically for niche requirements.
So, while it’s too soon to offload whole or large pieces of work to an LLM, know that they do serve a purpose and provide a great deal of help to your everyday tasks. It’s time to start exploring them more.