I grew up with computers that did what they were told. Remember those? Admittedly, to get those computers to do what you wanted, you had to speak to them in a very specific way, using very specific magical incantations that ordinary people (you might call them “muggles” today) could not understand.
Back then, we called it “coding”. Today, people call it “prompting”.
Different words, but do the same problems remain?
With a traditional computer program, the computer follows a set of instructions. It follows them “to the letter”, even when the outcome may not be what you want. We are familiar with “computer errors” and “bugs”, but we’re also used to humans owning those issues. People write code; people make mistakes; people find and fix those mistakes. Over time, software should become more stable, more useful, and more trustworthy.
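To see what “to the letter” means in practice, here is a toy sketch in Python (invented purely for illustration): the program below does exactly what it was told, which is not quite what was meant.

```python
# A program follows its instructions exactly, even when they are wrong.
prices = [10.00, 20.00, 30.00]

total = 0.0
for i in range(len(prices) - 1):  # bug: the "- 1" silently skips the last price
    total += prices[i]

print(total)  # prints 30.0, not the 60.0 we wanted: no error, just a faithful mistake
```

The computer didn’t misbehave; it obeyed. A human wrote the bug, and a human can find it and fix it.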
Now, computing is entering a new era, with a new tool set. Whether you choose to call it “vibe coding” or “agentic development”, the principle is the same: you no longer write code; you write instructions, which a large language model then interprets to produce the code for you.
In theory, an empowering utility that eases workloads for over-stretched software developers and puts the power to “just make things” in the hands of non-IT people. (We still call you muggles, though).
In reality… maybe not quite so great.
Problems abound with vibe coding and agentic systems. They aren’t very good at security. They aren’t very good at scalability. They’ve yet to produce a single billion-dollar, or even million-dollar, app. Anthropic claims that 80-100% of its code is being written by Claude; its systems fall over a lot as well. Microsoft and Amazon claim some big numbers too… and have the big software outages to prove it. (There’s a rumour on X right now that Amazon are having internal meetings about the “blast radius” of “Gen-AI assisted changes” and may be blocking any such changes without a manual review by a senior engineer going forward.)
Recent research released by Alibaba showed that, whilst generative AI models were good at solving small, previously seen problems, they failed when it came to maintaining changes in larger, established apps.
Let’s face it, if vibe coding and agents were all they were cracked up to be, someone would have fixed Microsoft Teams by now, right?
But that doesn’t mean that generative AI technology is useless. Far from it. You just need to know where you can deploy it, how you can use it, and where you shouldn’t.
The problem is, we are used to computers giving us perfect answers, and we are used to complaining when they don’t. Excel doesn’t make mistakes when it adds up numbers, but an AI can and will. Why does it cost billions of dollars to make something that’s dumber than your calculator?
Well, it’s because adding up numbers is a problem with a discrete answer.
Generative AI doesn’t really do that. Generative AI comes up with answers that are plausible and probable… but they might not be right.
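To make that distinction concrete, here is a toy sketch in Python (the words and probabilities are invented; a real model weighs thousands of possible tokens at every step): adding numbers always gives one answer, while generating text means sampling from a probability distribution.

```python
import random

# Deterministic computation: the same inputs always produce the same answer.
def add(numbers):
    return sum(numbers)

assert add([2, 2]) == 4  # true every single time

# Generative AI, in miniature: the model assigns probabilities to possible
# continuations of a prompt and samples one. These numbers are invented.
next_token_probs = {"4": 0.90, "5": 0.06, "four": 0.04}

def generate(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print("2 + 2 =", generate(next_token_probs))  # usually "4"... but not always
```

The wrong answer is unlikely, but it is never impossible. That is what “plausible and probable… but maybe not right” means.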
This is the power and the problem with AI.
There’s no perfect answer to “What is the summary of this article?” There are a lot of very plausible and probable ones, though. That’s powerful for overstretched executives who would love to have an assistant reading reports and emails for them, even if that assistant makes mistakes sometimes. It’s also an answer where you should be in a good position to judge when the answer is plausible and when it isn’t.
Summary doesn’t sound right? Check the original. Task list from a meeting doesn’t match what you noted in your pad? Go back and read the transcript for yourself. It might sound weird, but the best answer, right now, to dealing with vibes is… “more vibes”. When the answer from AI doesn’t feel right, there’s a good chance it isn’t.
But what happens when you don’t know much about the topic? What happens if the answer really matters and you don’t have the expertise to judge if the answer you are being given is right? We can live with ‘Maybe’ when it comes to summarising an email or drafting a birthday card. We can probably even live with it for a ‘vibe-coded’ personal website page. But ‘Maybe’ is a terrifying word when it’s applied to a banking ledger, legal documents, health care, a flight control system, or an IT security patch.
This may go some way to explaining why, although “Big Tech” has had some notably large layoffs attributed to AI, the number of vacancies for software developers is at a high. The magic hasn’t gone away; the wand has just changed shape. For decades, we kept the pesky muggles out with syntax and semicolons. Now, the gates are open, and everyone can cast a spell.
But here’s the thing about magic… the more powerful the spell, the more dangerous the backfire. We’re entering a world built on ‘Maybe’, and in that world, the most important skill won’t be talking to the machine. It will be knowing when the machine is lying to you.
Human oversight and compliance are key to our approach to AI at The Source.
As we start to deploy AI tools to augment the capabilities of our team, helping us better interpret data, interact with third-party systems, and communicate with customers, we’re also deploying human and digital safeguards to make sure that the decisions and recommendations generated by AI meet our high standards of accuracy, compliance, and great customer outcomes.
Catch the latest articles and news on our LinkedIn – https://www.linkedin.com/company/source-insurance-ltd/
Looking to boost your CPD time? Visit LearningLab.
