Despite the old adage of the tortoise and the hare, when it comes to digital transformations, slow and steady does not win the race, according to Erik Schmiegelow, CEO of Hivemind Technologies, a Berlin-based software engineering business. Hivemind’s approach focuses on fast software delivery via small, incremental releases to validate assumptions and derisk the process. In a discussion with McKinsey partner Thomas Delaet, Schmiegelow explains how speed reduces risk in transformations, the pros and cons of public cloud, and the impact of generative AI (gen AI) on developer productivity. What follows are edited highlights from that conversation.
Delivering value via speed
Thomas Delaet: Why do you see speed as the main value driver for transformations?
Erik Schmiegelow: Speed is of the essence, especially when you’re operating in a very complex environment. On the flip side, if you’re looking at a greenfield project, it’s very simple. You have a clean slate and can basically design your system with little risk or interference from other departments or systems.
That’s not what modern IT looks like. You very rarely have greenfield projects and, in most cases, need to collect as much feedback as possible. You need to expose integration breakpoints and problems as quickly as possible, because that alone allows you to plan and adapt to changes. That is why speed is of the essence.
By establishing a continuous flow of small, incremental releases to users, you can validate every assumption. You can expose problems and deal with them immediately, rather than trying to plan ahead as much as possible for things you cannot verify. That’s why speed is a very crucial tool to derisk software delivery projects.
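In practice, “small, incremental releases” are often gated behind percentage-based feature flags, so each assumption is validated on a small cohort of users before full rollout. The following is a minimal, hypothetical sketch of such a flag in Python; the feature name and rollout percentage are illustrative assumptions, not Hivemind’s actual tooling.

```python
import hashlib

# Hypothetical percentage-based feature flag: each user is deterministically
# bucketed, so a release can be validated on a small cohort before full rollout.
ROLLOUTS = {"new_checkout_flow": 5}  # feature -> % of users who see it


def is_enabled(feature: str, user_id: str) -> bool:
    """Return True if this user falls inside the feature's rollout percentage."""
    percent = ROLLOUTS.get(feature, 0)
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = digest[0] % 100  # stable bucket in [0, 100) per feature-user pair
    return bucket < percent


if __name__ == "__main__":
    enabled = sum(is_enabled("new_checkout_flow", f"user-{i}") for i in range(10_000))
    print(f"{enabled / 100:.1f}% of users see the new flow")  # roughly 5%
```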
You often have these big public-sector projects that take years to accomplish, and when the software’s finally released to users, that’s when the problems start popping up. And the problem with that approach is that you waste a lot of time anticipating problems and planning for features that you never verify.
Putting security at the beginning
Thomas Delaet: I imagine some companies aren’t convinced by this speed argument because of concerns around increased risk. A second counterargument involves the problems inherent in having a number of parties within a broader ecosystem integrate a continuous stream of software. So from both a risk and an integration point of view, how do you make the speed argument?
Erik Schmiegelow: Let me start with the security argument, because that’s an important one, especially in regulated industries. The traditional approach to security is essentially to do large, one-time audits of systems with penetration testing, security assessments, code reviews, and so on. That’s something that usually works quite well in monolithic legacy systems with very few releases.
However, that’s not the world we live in anymore. Generally—and especially in regulated environments—system landscapes are a combination of different components that work together, which means traditional single-point and end-of-chain audit processes don’t cover all potential problems.
You need to integrate security considerations into the fast-delivery process in an approach we call SecDevOps (security, development, operations). We specifically put the “Sec” in front of “DevOps” because this reinforces the importance of security by design. It’s an integral part of the fast-delivery process, in which each step of the delivery has its own security checks, security audits, and design considerations baked into the process and the architecture of the application. We rely heavily on continuous testing and automation to ensure security is tightly coupled to DevOps, because the sheer number of possible vulnerabilities greatly exceeds what can possibly be covered in a single, traditional audit.
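To illustrate what such baked-in checks can look like, here is a minimal sketch of an automated security gate in Python. The tools invoked (pip-audit for known dependency vulnerabilities, bandit for static analysis of Python code) are common open-source examples, not necessarily Hivemind’s stack, and the source path is a placeholder.

```python
import subprocess
import sys

# Minimal SecDevOps-style pipeline gate: every delivery runs automated security
# checks, and a failure blocks the release instead of waiting for a one-time
# end-of-project audit.
CHECKS = [
    # Scan declared dependencies for known vulnerabilities.
    ["pip-audit", "--strict"],
    # Static analysis of the application code for common security issues
    # (-ll reports only medium severity and above).
    ["bandit", "-r", "src/", "-ll"],
]


def run_security_gate() -> int:
    for cmd in CHECKS:
        print(f"--> running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"security gate FAILED on: {cmd[0]}", file=sys.stderr)
            return result.returncode  # block the pipeline
    print("security gate passed")
    return 0


if __name__ == "__main__":
    sys.exit(run_security_gate())
```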
This SecDevOps approach greatly improves application security and reduces the attack surface, as well as the risk of delays or delivery stoppages, even when companies choose to do a final audit at the end of the delivery. As a security professional, by embracing the SecDevOps model, you also gain a much better understanding of the application and how it works.
SecDevOps gives a competitive edge: not only can teams recognize vulnerabilities faster thanks to automation and testing, but they can also deliver fixes more quickly.
Overcoming organizational issues for better software delivery
Thomas Delaet: If you look at brownfield situations, what kinds of issues typically make software delivery inefficient?
Erik Schmiegelow: There’s often a disconnect between teams and product development, as well as between teams and the environment in which they operate. Siloed development processes present a significant problem.
That leads to situations where product owners chuck something over the fence to developers, who do their thing and then chuck it over to quality assurance, who try to make sense of it, may or may not catch something, and chuck it over to operators, who put it into production. There is no communication except for chucking things over the fence, and there is no feedback. Nobody knows what’s happening until the software actually hits the market and customers start using it, which is a real problem. These siloed approaches serve nobody, except maybe an organizational matrix.
Another typical problem occurs when decisions are disconnected from their impact. Say a unit decides on a stack or a specific process but is ultimately not the one that has to implement it. Nobody can question the validity of that decision, because it’s completely detached; everybody’s a cog in a big machine, and nobody sees the big picture.
Fast-delivery teams that are self-organized and in control of the entire value chain, from product specification and ownership to delivery, can help. They can make decisions about how they run and deploy things while getting maximum feedback from users and the market. Reducing interference also effectively guards against friction with other units within the organization, because organizations tend to impose things on teams without necessarily validating workflow impact.
In our experience, the source of all these issues is typically an organization with a specific structure that is not fit for purpose, usually because there’s historic development behind it—especially with incumbent companies that have been around for quite a while. They were designed before the digital age.
Paper-based processes are a perfect example. Why are digital-transformation initiatives so hard? Because, essentially, business processes are still designed as if everything were still on paper. And especially when you interact with government bodies, the digitalization approach is to basically convert paper forms to PDFs that you can email, which is absolutely nonsensical because it takes neither the operating environment into account nor the impact on users.
The pros and cons of public cloud
Thomas Delaet: If you think about transforming the way you do software delivery, what role does public cloud play?
Erik Schmiegelow: Organizations evaluating whether to go on-premises or on public cloud need to take a comprehensive look at the cost of running the infrastructure. What you might save on infrastructure and hardware you’re going to spend on system operators, even if you use things like OpenShift and Kubernetes to streamline resource utilization. In a pure on-premises scenario, you may save on public cloud operating hours, but you may end up spending more on hardware.
The savings through application migrations aren’t always evident either. In the lift-and-shift approach, for example, the benefits of using public cloud are minimal. You can save money on things like databases, because you can basically migrate to managed databases and thus reduce operating overhead and get backups built in, but otherwise, the cost advantages are limited.
That’s why cloud-migration projects should always be about remediating legacy applications to benefit from cloud-native services or integrating legacy applications with new implementations that take advantage of the elasticity and autoscaling features of public cloud. This requires some changes to the team in charge, too—cloud-native application development requires fast-delivery approaches and a shift in application design.
Otherwise, even if you migrate and rewrite a new application, you’ll be using teams that are used to the legacy environment’s approaches, and they will end up building the new application with the same architecture as before. This will negate any benefits in terms of speed of execution and flexibility you would get with cloud-native environments.
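To make that design shift concrete: a cloud-native rewrite typically moves from a stateful monolith toward small, stateless components that keep their state in managed services, so the platform can scale them elastically. Below is a minimal Python sketch of that pattern; the handler signature follows the common AWS Lambda convention, and the table name and event shape are illustrative assumptions.

```python
import json
import os

import boto3  # AWS SDK; DynamoDB is one example of a managed, cloud-native store

# Stateless handler: no local session state, no local disk. All state lives in
# a managed service, so the platform can run 1 or 1,000 copies of this function
# and scale them elastically with load.
TABLE_NAME = os.environ.get("ORDERS_TABLE", "orders")  # placeholder name
dynamodb = boto3.resource("dynamodb")


def handler(event, context):
    """Entry point in the AWS Lambda style: parse the request, persist the
    record to a managed store, and return; nothing is kept in the process."""
    order = json.loads(event["body"])  # assumes an API Gateway-style event
    dynamodb.Table(TABLE_NAME).put_item(Item=order)
    return {"statusCode": 201, "body": json.dumps({"id": order["id"]})}
```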
Evolving gen AI use cases
Thomas Delaet: How can the gen AI use cases developing around developer productivity help with everything we’ve discussed? Where do you see the biggest bang for the buck?
Erik Schmiegelow: The best current application of gen AI is assistive coding, because it turbocharges productivity by reducing all the boring stuff. If you look at the prototypical day of a developer, 30 percent is real coding, 70 percent is debugging, and half of that first 30 percent is work that’s annoying, repetitive boilerplate. Generative AI significantly increases developer productivity by cutting that portion down massively and by giving more certainty through reduced research time. Soon, general LLMs [large language models] will be able to support conversational coding sessions for productive use, in which developers describe their needs and let the model create most of the structural code for the application, along with test automation. At the moment, it’s not quite there yet. Assistive coding is relatively low-hanging fruit, but it’s already significantly improving the experience and productivity of teams.
The more interesting use case for gen AI lies in that soft spot where you combine enterprise data repositories with query use cases using a technique called retrieval-augmented generation (RAG). This tackles the issue of hallucination in models, as well as the problem of training models on enterprise data sets, by combining the LLM’s natural-language understanding with a vectorized data repository for retrieval. In this way, the model can be more accurate, because it’s focused on the right data, and it addresses privacy concerns, because the data stays in the enterprise repository rather than being used for training. The use cases for RAG are plentiful. Take the insurance industry, for instance. It has large sets of unstructured documents, especially policy documents. A lot of them are digitized but originated on paper. It’s very difficult to do reverse analytics or reprocessing of policies without involving lots of human beings looking at PDFs or pieces of paper and retyping what’s on paper.
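A minimal sketch of the RAG pattern looks like the following. To keep it self-contained and runnable, TF-IDF retrieval stands in for a production embedding model and vector database, the documents are invented, and the final LLM call is left as a printed prompt.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny invented stand-in for an enterprise document repository.
DOCUMENTS = [
    "Policy 42: water damage is covered up to EUR 10,000 per incident.",
    "Policy 42: fire damage requires a certified smoke detector on site.",
    "Policy 77: liability coverage excludes commercial vehicle use.",
]

# In production this would be an embedding model plus a vector database;
# TF-IDF keeps the sketch self-contained.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(DOCUMENTS)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k repository documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    ranked = scores.argsort()[::-1][:k]
    return [DOCUMENTS[i] for i in ranked]


def build_prompt(query: str) -> str:
    """Ground the LLM in retrieved passages to curb hallucination."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    # The prompt would be sent to an LLM; printing it keeps the sketch runnable.
    print(build_prompt("Is water damage covered under policy 42?"))
```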
Generative AI can massively improve that reprocessing work by extracting the necessary attributes from the unstructured documents and rendering them in the desired form. This can turn extremely valuable information from old, unstructured policy documents into structured records and make them as accessible as new ones. It’s a process that would otherwise require a lot of human interaction at significant cost.
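One way to implement that extraction is to ask the model for a fixed JSON schema and validate the output before it enters downstream systems. In the sketch below, call_llm is a stub returning a canned response so the example stays runnable; the field names and prompt are illustrative assumptions.

```python
import json

PROMPT_TEMPLATE = (
    "Extract the following fields from the policy document as JSON with keys "
    "policy_id, holder_name, coverage_limit_eur, start_date:\n\n{document}"
)

REQUIRED_FIELDS = {"policy_id", "holder_name", "coverage_limit_eur", "start_date"}


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned response here so the
    sketch stays runnable without any external service."""
    return json.dumps({
        "policy_id": "P-42",
        "holder_name": "A. Example",
        "coverage_limit_eur": 10000,
        "start_date": "2019-03-01",
    })


def extract_attributes(document: str) -> dict:
    """Turn an unstructured policy document into a validated, structured record."""
    raw = call_llm(PROMPT_TEMPLATE.format(document=document))
    record = json.loads(raw)  # fails loudly on malformed model output
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    return record


if __name__ == "__main__":
    print(extract_attributes("(scanned policy text would go here)"))
```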
A second area where generative AI can make a large impact is everything involving “fuzzy” matching. If we look at current business processes, there are quite a lot of workflows with interruptions of machine-to-machine flows by required human interaction in between to validate, check, or execute other data entry tasks. Machines are traditionally pretty bad at assessing and summarizing text, which completely changes with LLMs.
In such a scenario, you can leverage LLMs’ capabilities by preserving the workflows but increasing throughput: a model pretrained on the enterprise data sets makes suggestions, and rather than processing these items themselves, data entry clerks are reduced to reviewing the model’s suggestions. Once the model is sufficiently trained and supervised, you can eventually bypass the manual steps entirely for fully automated data processing. Both of those are the real sweet spots of enterprise applications of generative AI. The list of potential use cases is quite expansive, most significantly where data extraction, summarization, and matching are significant parts of the business process.
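That progression (model suggests, human reviews, automation expands over time) can be sketched as follows. Here the standard library’s difflib is a deliberately simple stand-in for an LLM-based matcher, and the vendor names and confidence threshold are invented for illustration.

```python
from difflib import SequenceMatcher

# Invented master data and threshold, for illustration only.
KNOWN_VENDORS = ["Hivemind Technologies GmbH", "Acme Insurance AG"]
AUTO_ACCEPT = 0.90  # above this score, the suggestion bypasses manual entry

REVIEW_QUEUE: list[tuple[str, str, float]] = []  # (raw input, suggestion, score)


def suggest_match(raw_name: str) -> tuple[str, float]:
    """Fuzzy-match free text against master data.
    difflib is a simple stand-in for an LLM-based matcher."""
    def score(vendor: str) -> float:
        return SequenceMatcher(None, raw_name.lower(), vendor.lower()).ratio()
    best = max(KNOWN_VENDORS, key=score)
    return best, score(best)


def process(raw_name: str) -> str | None:
    """Auto-accept confident matches; queue the rest for human review."""
    match, confidence = suggest_match(raw_name)
    if confidence >= AUTO_ACCEPT:
        return match  # fully automated step
    REVIEW_QUEUE.append((raw_name, match, confidence))
    return None  # a clerk reviews the suggestion instead of retyping data


if __name__ == "__main__":
    print(process("Hivemind Technologies GmbH"))  # auto-accepted
    print(process("Hivemnd Tech."))               # routed to human review
    print(REVIEW_QUEUE)
```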