LLM tools can bring great leaps in efficiency. But at what cost?

We all know about, and probably use, Large Language Models, or LLMs. More commonly referred to as "AI" by the general public, they've been the hottest craze for a while now. Everyone, everywhere, in every sector is trying to implement them somewhere, somehow. We literally have "AI" toothbrushes now... If you're a dev, sysadmin, or DevOps engineer, you've very likely incorporated LLM tooling into your workflow in some capacity. In fact, many organizations are encouraging these tools and actively rolling them out across their development life cycle. It's hard to argue with the efficiency gains they bring. In seconds, you can have entire methods written for you. No need to wade through basic logic and structure anymore. But as with everything in this world, change brings an equal and opposite reaction.

Like many in the tech world, I've had a slew of philosophical thoughts stream through my head regarding the new deep learning frontier. One particular concern that continues to resurface is how this technology can cause our minds to atrophy. Preliminary studies showing this very effect are already beginning to appear. Anecdotal evidence can be found all over Reddit as well: many users report struggling to draft basic emails without an LLM assistant. From a macro standpoint, I worry that the corporate world will heavily devalue human skill and knowledge in its insatiable thirst for ever-greater efficiency gains. Many junior admins coming into the field will ask themselves: "What is the point of learning something like low-level log analysis when that skill doesn't pay anymore? Nobody writes their own scripts anymore."

Our thinking about electricity, the internet, and the sea of resources they make available suffers heavily from normalcy bias. We've done such an excellent job of increasing uptimes that everyone expects those resources to always be at the ready. Keeping that in mind, let's look at the role of the system administrator for a moment. They're the guys and gals who keep your infrastructure running. When disaster strikes, they're the emergency responders who pinpoint problems, perform rollbacks, and reconfigure services to get you back up and running. A disaster could be anything from a buggy Ansible playbook to a major environmental event. If your admin team relies on a prompt to spoon-feed them configuration steps and guide them through low-level utilities, what happens when the disaster has cut off access to an LLM? Do you want the fate of your organization resting in the hands of a skilled professional who has a fundamental understanding of the systems they've been tasked with maintaining? Or somebody who's super efficient with ChatGPT but can't configure a network connection profile or navigate rsyslog without it?

The widespread dive into LLMs has also led to many treating their output like gospel. A few weeks ago, I gained firsthand experience with the risk this can bring. While performing a final security audit on a cluster of new production servers for a client, I discovered an instance with port 5432 exposed publicly. I decided to dig further and determine who opened the port and when. Well... turns out it was me. I had used ChatGPT to quickly give me a bash script to install and configure the requisite services in a pinch. Curious, I went back to check my original prompt, thinking I had likely made an error. To my surprise, ChatGPT had ignored the firewall zone specifications I had given it. I can only surmise that since I was installing Postgres, it "assumed" (predicted) it should also open 5432 in all zones.
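For illustration, a quick audit along these lines is what caught the problem, and it's the kind of check worth running on anything an LLM has configured for you. This is only a sketch: it assumes firewalld is managing the firewall, and the public/internal zone names are just common defaults, not necessarily what your servers use.

```bash
#!/usr/bin/env bash
# Audit every firewalld zone for an unexpectedly exposed PostgreSQL port,
# then close it where it doesn't belong. Zone names are illustrative.

PORT="5432/tcp"

# Report every zone that currently has the port open.
for zone in $(firewall-cmd --get-zones); do
    if firewall-cmd --zone="$zone" --query-port="$PORT" > /dev/null 2>&1; then
        echo "Port $PORT is open in zone: $zone"
    fi
done

# Remove the port from the public zone and keep it only on the internal one.
firewall-cmd --zone=public --remove-port="$PORT" --permanent
firewall-cmd --zone=internal --add-port="$PORT" --permanent
firewall-cmd --reload
```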

As a system administrator, I feel that I have a duty to maintain a working knowledge of the systems I'm charged with deploying and administering. I should be able to manage repos, set up network connection profiles, and pinpoint issues in system logs without relying on a prompt. However, I also recognize the need to embrace change. For me, the question becomes: what is the right balance of incorporating LLM tooling into my workflow while maintaining direct human oversight over the systems I manage? Admittedly, I'm still figuring that out. Presently, I try to limit LLM use to redundant, time-consuming tasks, specifically those where the consequences of incorrect output are negligible. Need to declare a bunch of variables? Have the LLM do it. Need to write complex iptables rules? I should be doing that myself. I also resist the temptation to have an LLM write shorter bash scripts for me. I feel this keeps my mind practiced in thinking critically as an admin. I want to be able to jump in and extract relevant information without having to ask what flags I need, as in the example below.
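To make that concrete, here's the sort of one-liner I want to be able to write from memory rather than prompt for. It's a hypothetical example, and it assumes sshd logs to the systemd journal under the unit name sshd (on some distros the unit is named ssh).

```bash
# Failed SSH login attempts from the last 24 hours, grouped by source IP.
journalctl -u sshd --since "24 hours ago" --no-pager \
    | grep "Failed password" \
    | awk '{print $(NF-3)}' \
    | sort | uniq -c | sort -rn
```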

Overall, I think deep learning has the potential to benefit mankind greatly if we can learn to use the technology sensibly. LLMs have helped me grasp concepts and learn new skills in timeframes I never thought possible. I've more or less used them as personalized instructors, with great success. I feel the key to striking that balance will be remembering when to seek out objective resources or consult the documentation directly. More importantly, I hope that companies will realize the importance of continuing to place a premium on human skill and knowledge.
