Hallucinate for a better future in AI
It does not hallucinate – you do!
Or: What can Agile learn from the AI hype?

Homer, you’re hallucinating again.
Many people mention hallucination when they discuss reliable and responsible “AI”. But does it hallucinate? No. It does not hallucinate at all – all an LLM does is produce nonsense output when it is not well trained for your specific need. Which is perfectly OK. It does not think, it has neither a conscience nor self-awareness, and it cannot know what is wrong, neither logically nor ethically. At most it can approximate that statistically, given good data and training. It is not in the least similar to how a mind works.
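To make the “statistics, not thought” point concrete, here is a minimal, hypothetical sketch of what next-token generation boils down to: picking the next word from a probability distribution learned from data. This is not any particular model or vendor API; all names and numbers below are invented for illustration.

```python
import random

# Toy "language model": for each context word, a learned probability
# distribution over possible next words. Purely illustrative numbers.
NEXT_WORD_PROBS = {
    "agile": {"team": 0.5, "coach": 0.3, "velociraptor": 0.2},
    "team":  {"delivers": 0.6, "hallucinates": 0.4},
}

def next_word(context: str) -> str:
    """Sample the next word from the learned distribution.

    There is no understanding here: if the training data was bad or the
    context lies outside it, the output is still a confident-looking word.
    """
    probs = NEXT_WORD_PROBS.get(context, {"<nonsense>": 1.0})
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(next_word("agile"))    # plausible continuation
    print(next_word("quantum"))  # unseen context -> "<nonsense>"
```

Whether the output looks brilliant or broken, the mechanism is the same weighted dice roll; only the quality of the learned distribution differs.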
Are you trying to dodge responsibility by blaming your bad choice of tools? How did we become so desperate, and why?
Even when you perceive the nonsense as fact, that is your illusion – and even then it is nowhere near a hallucination.
Needle-scratch – basics first.
Yes, you do hallucinate
The brain does hallucinate, though. Constantly. It turns out hallucination is the most energy-efficient way to make thought-through decisions.
See: Wikipedia’s take on hallucination
Wait what?
The brain makes a great many tiny assumptions, constructs different scenarios in parallel from them, and then subconsciously tries to connect these to a personally desirable outcome, forming the allegedly most realistic scenario that you will consciously follow. When triggered, that is. And please, read that twice (-:
It really pains me to see that this very basic agile approach is hardly ever used when brought to scale. Running experiments in parallel would be the starting point for handling complex situations. We have all known for a long time that it is a fantastic idea; that is how you learn from nature. Unfortunately, it is hardly ever practiced. Usually, we commit to an extrinsically motivated goal we don't resonate with, for lack of a vision we could believe in, and excuse (aka document) it accordingly afterwards. That is the exact opposite of a good idea. It only creates perverse incentives, and no innovation can arise from it.
Real knowledge is not explicit by nature. Nor by nurture. Making things explicit does not make anything real. An explicit breakdown makes you replaceable by an LLM. And that is where AI kicks you in the teeth. Pardon my French.
What about metrics?
Yes, what about(ism) them.
We all have a choice. We could continue quoting management gurus who went “you can't improve what you can't measure” – as a mantra, although we have no means of actually doing it – or instead argue that if you work on/for objectives, you are going to willingly ignore opportunities and innovation. Because it creates perverse incentives. Why would I chase after low-hanging opportunities when my quarterly OKR goal tells me to do as I'm told for my bonus? Sounds counter-intuitive to me, but future history will be the judge.
There are real measurements and tools for them in real science, of course. But who cares about real science? Let's rather measure the work of others and hold them accountable. Why should I be? I'm right, after all.
What are you really measuring when you don't know jack about how the product is crafted?
Why thought-through?
Running away from a velociraptor does not relate in any way to what is described above; in those cases, we follow a totally different approach. I would guess that most of the problems we engage with, and the modern tools and working models we deal with, are not found in that general area. If they are, I strongly suggest rethinking the team's purpose. You gotta draw the line somewhere.
Will AI change the way we do IT drastically?
That is what we hear. Everyone please panic – or the person selling you AI assistants will not profit from youse.
And yes, it probably already has, but…
First of all, what is IT? Everything is IT nowadays; software is eating the world, everything is a computer.
Software development is probably what they mean, so there we go again. Are you personally an expert? How good an expert? Is that something a statistical function trained on knowledge from the web can do better for a non-expert?
It's easy. You try it and don't sleep at night because you have absolutely no idea if “production” works. Is that litmus test enough, or do you want others to sleep miserably for you?
We did not have the time to evolve into a community that produces quality software. It's not like the steam engine. Let's break it down into a few digestible steps:
- Humans don't generally care about good practice
- Humans implement bad practice
- Humans document their practice
- LLMs get trained on all available documents
- The result: more bad practice, faster.
So the problem is not whether we can do things even faster, like dogs around a track. The question is: why did we not take the time to do things right, instead of only claiming we did for an alleged speed advantage?
AI is just better at document processing
Machines, in general, are better at machine work.
For decades, nay, centuries, we have been trying to normalize things just to avoid paying for experts, philosophers and good old human craftsmanship. Thinking power is not efficient. In so many ways.
It is not possible, though, to replace actual thinking. The problem has not become complex; it just is. It always has been.
Now, more than ever, formalities and norms are far better processed by machines, and the machines get better by the minute. That is not a human trait. It is a distortion to make people do a machine's work.
Don't get this wrong; it is not about a particular area of work. We all know people in our field who make rules and/or only want to play by those rules. When someone only adheres to home-made rules, we are dealing with idols, gurus. It becomes a religious problem all too quickly. An idol does not accept other idols of different dialects.
But people who only play by rules – which is nothing but a very simplistic, degenerate form of document processing – are the only ones replaceable by an LLM. Sadly, a guru is normally a decision maker, and they will not replace themselves. They will turn the arguments against you and make it even more religious.
Facilitation
What about the people in between? Neither gurus nor followers. Is there a place for facilitation?
Of course, as in every field of work, there are both: helpful things, and people who only want to profit from following the herd down to Greece. If we stop for a while and think for a minute: what can one actually do? What team am I on? Again, small and (in)digestible steps:
Good facilitation does not try to change people. But neither should it try to change the system and care for the people. Especially not if that is what the guru said.
Facilitation should create the right environment for the natural interaction of mutually contradicting people (small, diverse groups) to break things down themselves until understanding emerges. People need to make their own arguments, decisions and mistakes. Only then will a change in the actors, and in the interactions between them, produce a sustainable and sensible change in the system.
It’s a long shot. But how many times do we want to get punched in the face by new technology before we address the root cause?
Do you wanna know more? Get in touch.
Author
Danilo Biella, Agile & Quality Professional