The State of Software Development in 2026

It’s been a while since I posted anything here and, to be honest, I did not really have anything interesting to share. In the past I’ve written a lot about Rust adoption, and that is still an issue dear to my heart. However, with the industry trends of the past year I feel Rust adoption per se is not as important as I would like it to be (…or maybe it is even more so, but more on this later). Few people could have missed the dominating tech issue of the last couple of years: AI. I’ve talked about AI a bit in the past, when I was impressed with the state of AI assistants, but not to the extent that I felt this tech was much more than improved code completion. That was 2023, though. The tech has come a long way since then, so it’s time for my thoughts on the impact of AI.

Looking at the past six months alone, the speed of improvement has been gigantic. We are now in a position where hand-crafting code is starting to look like an artifact of yesteryear, even though a lot of developers have not noticed yet. Modern agentic tools have advanced to a degree where they accelerate development for “simpler” tasks by, at the very least, a factor of 5 to 7 (judging by my own experience, probably even more). Now, you might be tempted to call me a snake-oil salesman, and I’ve been skeptical of AI tools for a long time, but the recent developments that came with Opus 4.6 convinced me that the age of manual code writing is coming to an end, and fast. In the time since 4.6 was released I have spent a lot of my free time working with it to figure out what the thing is capable of, and I’m still torn between “this is utterly awesome” and “OMG, how am I going to earn my money in the future?”.

So, back last year I was using Opus 4.5 for the most part, and the feeling was always a bit “meh”. Yes, it produced kind-of-working code, but getting there was a lot of effort and you constantly had to babysit the thing, even if it got a fairly detailed prompt. With 4.6 this is no longer the case. You can hand it complex tasks that need to touch dozens of files and it will create a well-structured plan to operate on. Most of the time it will get the feature you told it to implement right on the first attempt (at least to the degree you specified it), and if not, another two rounds of explanations will usually get you there. The improvement between 4.5 and 4.6 took the output from “junior fresh from college” to “developer with two to three years of experience”, at least for the tasks I used it for.

My examples

zeitly

The company I work at is an engineering shop. We usually work on a project basis and bill per hour. A lot of the time we get fixed budgets allocated by our customers and send them an invoice containing the hours worked at the end of the month. Doing this at some scale requires a robust time-tracking system. And yes, there are literally tons of them out there. However, we struggled to find one that actually suits us, because most systems tend to be either pure workforce management or project management tools with time tracking tacked on (and boy, can you feel that). We wanted an integrated solution that added some compliance features, overtime tracking, some budgeting, and the possibility to do monthly reviews before the invoices are generated.

The tool that came closest to what we needed was an open source tool called Kimai. At the time it provided about 80% of the features we wanted. Since it was open source and we had some special requirements one of my business partners came up with, we decided to fork it and add the missing 20% ourselves. We’ve been using this solution for the past four years and, from a usability perspective, it is awesome. We’ve had trouble with some conceptual shortcomings of Kimai that have led to issues for us; not the fault of the software, but of the way we want to use it.

Fast forward to the end of 2025, and we’re starting to think about how we could integrate our business processes more tightly. Kimai is great, but its output only feeds into a bloody mess of Excel files and scripts (yeah, at heart we’re still a ten-person shop). This leads to a lot of duplicate data (e.g. budgets are stored in Kimai to ensure no overruns, but also in Excel files for controlling work), which needs to be synced, often with manual steps: an error-prone and lengthy process. That was fine when all you had to do was blast out three invoices a month, but today, with a company pushing 40 developers, this approach just does not cut it anymore, and we have one person struggling to get our invoices out on time come month’s end.

The obvious solution we discussed last year was whether or not we should scale up to an ERP system. However, these systems tend to be large and costly to introduce, and, more importantly: if we were going to go that way, I’d like an integrated solution that uses the same data store for invoices, projects, hours, quotes, etc. Basically an ERP, but with robust time tracking for our purposes. Since I had some free time on my hands on the weekends, I told myself: let’s see how far we get in a usable amount of time. So I wrote down a three-page list of requirements and set Claude to work on creating an ERP-Not-So-ERP-Time-Tracking-App. I was fairly sure that Claude would perform well, since there’s arguably more web code available for training than anything else. It did not disappoint.

This has been a somewhat longer effort, spanning two weeks of intensive weekend sessions and some weekday sessions. The end result is pretty impressive, though. Not only did I manage to “vibecode” a product that out-features Kimai in every aspect (regarding time tracking), but it also covers nearly all the important workflows related to managing quotes and invoices for our customers. Apart from the features, the produced quality is at least “okay”, with high test coverage (just shy of 1000 tests) that the agent uses to make sure it does not break stuff, and polished documentation.
I myself did not write a single line of code, nor did I look at much of it, but now I have an ERP-light system that covers all the use cases the company needs (and since I designed them to be general purpose, it should also be valuable for others). I don’t know how to proceed with the code yet, but it turns out that modern AI agents are not limited to super-small-scale projects. What this project did was, essentially, compress at least a year of work into two weeks… part time. I’ll freely admit that I had a fairly good idea of what I wanted right from the beginning, and since there was only a single stakeholder (me), very little iteration was necessary. Coming from our Kimai solution, I knew what I liked about Kimai (and thus would keep) and what I didn’t like and had to change.

IO-Link

Being an embedded guy, I regularly need to work with fieldbuses, one of which is IO-Link. The interesting bit about IO-L is that it has a completely open specification and is designed for smaller devices, leading to a comparatively simple protocol (albeit one with nasty timing requirements to follow). The spec comes in at around 300 pages. One of my more recent experiments was to task Claude with implementing IO-Link in Rust for bare-metal systems, handing it only the text form of the specification. This experiment is a lot more interesting than you might think. For one, there are no open source IO-L device stacks (there is an open source IO-L master, which needs a commercial license for commercial use, but IO-L masters behave quite differently from IO-L devices), and there is certainly no such stack available in Rust, with Rust still being relatively niche. So this is kind of the worst case for an AI model: it’s confronted with a case that sure-as-hell has not come up in the training data. Apart from that, IO-L has notoriously finicky timing requirements, which make integrating existing commercial stacks, with their man-years of development experience, a real pain.

With all that said, Claude knocked it out of the park. The implementation did take a couple of rounds, with Claude asking me questions about how it should handle certain things, but we got to a point where it had implemented a unit-tested version of the protocol within about three hours. But hey, this is the easy part, right? That’s just code. No real hardware involved, so not hard to do. Right, that’s what I thought as well, and I did not expect much when I brought out an IO-L master and wired it up to a combination of STM32H5 / TIOX EVM eval boards to see how it would fare. The prompt it got from me was literally: “Create an example application using the stack, that runs on an STM32H563 Nucleo wired to a TI TIOX EVM board.” That is all it got.
What I got in return was an Embassy application running on my target, happily using the IO-L PHY… but not yet communicating. Remember, though: this was just the first shot at the example, which took all of 20 minutes to produce. I knew the interesting bit would be what came next: getting the thing to talk to the IO-L master so the master was actually happy. I wasn’t in this to get it to run at all costs, so I gave myself four hours, with the intention (and expectation!) to let it rest afterwards. What happened in reality was fascinating. The agent would strategically put RTT messages in the code, have me flash the firmware and feed the output back to it, and then proceed to make more adjustments to figure out what was going on. Along the way it found a bug it had produced in the checksumming code. We went back and forth for about an hour, with me giving it some advice on how to proceed. At some point I thought “screw it” and told it to use probe-rs to flash the binary itself and figure out what was going on. I left it to work for about half an hour, and when I looked back at the agent’s console, there it was: “Everything works fine, the device goes through the startup state machine and we’re receiving periodic data from the master.” And sure enough, looking at the RTT output of the device, I saw just that. Not only had it managed to write an application that was able to hit the nasty timing requirements, it also managed to implement the protocol and the higher levels to a degree that it got through the whole (involved!) startup sequence, a task that took one of my developers days to figure out the last time we had to integrate IO-Link into a device. The AI did it in 30 minutes. My personal involvement in this experiment: less than one man-day. For me, this experiment kind of put the nail in the coffin of the artisan software development we’ve been doing for the past fifty or so years.
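To give a taste of the kind of detail hiding in those 300 pages: the bug mentioned above was in the checksum logic. IO-Link’s M-sequence checksum XORs all frame octets together with a fixed seed and then folds the 8-bit result down to 6 bits in a pairwise pattern. Here is a minimal sketch of that scheme as I remember it from the spec; it is my own paraphrase, not code from the generated stack, so verify the seed and bit positions against your copy of the specification before relying on it:

```rust
/// IO-Link M-sequence checksum, sketched from memory of the spec
/// (IEC 61131-9). All octets (with the checksum bits of the CKT/CKS
/// octet zeroed) are XORed together starting from the seed 0x52,
/// then the 8-bit intermediate result is folded down to 6 bits.
fn iol_checksum(octets: &[u8]) -> u8 {
    // XOR-accumulate over the frame, starting from the seed.
    let c: u8 = octets.iter().fold(0x52u8, |acc, &b| acc ^ b);

    // Helper: extract bit `n` of the 8-bit intermediate result.
    let bit = |n: u8| (c >> n) & 1;

    // Pairwise fold into the six checksum bits.
    (bit(7) ^ bit(5) ^ bit(3) ^ bit(1)) << 5
        | (bit(6) ^ bit(4) ^ bit(2) ^ bit(0)) << 4
        | (bit(7) ^ bit(6)) << 3
        | (bit(5) ^ bit(4)) << 2
        | (bit(3) ^ bit(2)) << 1
        | (bit(1) ^ bit(0))
}

fn main() {
    // A frame consisting of a single 0x52 octet cancels the seed,
    // so the folded checksum collapses to zero.
    println!("{:#04x}", iol_checksum(&[0x52]));
}
```

It is exactly this sort of fiddly, easy-to-transpose bit juggling where the agent’s debug loop (instrument with RTT, flash, read back, adjust) paid off.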

My 2 Cents on all of this…

What we’re seeing is the democratization of software development. Going forward, even projects with little money to spend can create meaningful value for their creators. For me, having been a software architect for the best part of the last ten years, this shift has been super empowering, as it allows me to have my ideas implemented nearly as fast as I can type them out. But, and that is a big but: it spells trouble for the industry at large. We’re witnessing a tectonic shift whose magnitude we can’t estimate yet. What is likely is that shops who have traditionally relied on engineering talent to fill their ranks will suddenly find their internal developers becoming so productive that they don’t know how to keep them busy, which will, in the coming years, lead to a massive reduction in headcount at these companies. We can also expect the craft of writing software to become a lot less in demand. Just being able to pump out code for a spec is not going to cut it anymore. Domain knowledge is king, and this is where the real bombshell lies, IMO: in the last twenty years the industry has created a large number of engineering shops that hire out software engineers. Companies would swallow the cost of these engineers not knowing the domain; they’d grudgingly train the people but expect them to become productive fast. Now the fact that these developers don’t know the domain will count against them in a huge way, once pure development skill has been devalued by AI to the extent we’re about to see. I expect we’ll see a lot of engineering companies close shop or downsize dramatically, because they won’t be able to hire out their people anymore. The alternative is to create something these companies own: the product can no longer be “hours billed” as a proxy for “lines of code written”. Companies need products that scale and can be sold independently of worked hours.

All doom and gloom? Interestingly, no. I expect we’ll see a lot more software being written, but the scale of software projects (in man-days or billable hours or whatever suits you) is going to shrink dramatically. Today, a budget of 50k€ buys you about three to four man-months in Germany (if you find a cheap place). With artisan work you will most likely not get very far in that time, even if you hire a very experienced engineer for a project in a domain the person knows. In the “brave new world” this budget is still only going to buy three to four months (maybe a little more, since prices will probably drop a bit), but what you can expect to get out of that time is huge; see the examples above. The interesting takeaway is that this somewhat evens out the field between so-called “best-cost countries” and high-wage countries like Germany. When the hourly rate is multiplied not by 2000 hours per project but by 300, the gap between a best-cost country and a high-wage country becomes a lot smaller, and the softer aspects (closeness, cultural fit, language) will start to carry more weight again, which will open doors for German engineers. Small and mid-sized engineering shops who hope to keep doing what they’ve been doing for the past 20 years are going to have a hard time, though.
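To make the 2000-versus-300-hours point concrete, here is a toy calculation. The hourly rates (110 €/h for Germany, 35 €/h for a best-cost country) are illustrative assumptions of mine, not real quotes:

```rust
// Toy comparison of the absolute cost gap between a high-wage and a
// best-cost country, for a traditional project scope vs. an
// AI-accelerated one. All rates are illustrative assumptions.
fn project_cost(rate_eur_per_hour: u32, hours: u32) -> u32 {
    rate_eur_per_hour * hours
}

fn main() {
    let (germany, best_cost) = (110, 35); // €/h, assumed

    // Traditional project: ~2000 billable hours.
    let gap_traditional = project_cost(germany, 2000) - project_cost(best_cost, 2000);

    // AI-accelerated project of comparable scope: ~300 billable hours.
    let gap_ai = project_cost(germany, 300) - project_cost(best_cost, 300);

    println!("gap at 2000 h: {gap_traditional} €"); // 150000 €
    println!("gap at  300 h: {gap_ai} €");          //  22500 €
}
```

With these numbers the absolute gap shrinks from 150k€ to 22.5k€, small enough that closeness, cultural fit, and language can plausibly tip the decision.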
