Bringing discipline to development, without causing pain

What happens when young developers meet old business

Who cares? DevOps can fix it, right?

You might be wondering at this point, why does all this matter anyway?

We live in a world of DevOps where, even if we end up with large binary assets outside the repository or divided among multiple repositories, we can still unify these elements through DevOps magic for builds, testing and onward to releases. Opponents of Distributed Version Control Systems (DVCS) would argue that this is a Band-Aid over a bigger problem, and that keeping all the assets in one single store is a more prudent method overall.
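By way of a hedged illustration (Git LFS is not something anyone quoted here raises, just one widely used workaround), large binaries can be pushed out to a sidecar store while lightweight pointers stay in the Git repository. The file patterns below are hypothetical:

    # Install the Git LFS hooks once per machine (requires git-lfs)
    git lfs install

    # Intercept large binary types: pointers live in Git history,
    # the actual blobs go to the separate LFS store
    git lfs track "*.psd" "*.mp4" "*.vmdk"

    # The tracking rules land in .gitattributes, versioned like any file
    git add .gitattributes
    git commit -m "Track design, video and VM assets via LFS"

The DVCS sceptics’ point stands, though: this keeps the repository lean precisely by admitting that some assets don’t really live in it.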

Indeed, the so-called ‘single source of truth’ dictum is popularised and championed by Agile development with a capital A. The difficulty is, quite simply, keeping that single source of truth in one location in a world where multi-location, multi-tool, multi-disciplinary teams work to a multiplicity of software application requirements with a multiplicity of software assets.

The truth is (and don’t shout it too loudly) that source code is often a tiny drop in the binary bucket compared to documents, images, models, audio, video, even entire virtual-machine (VM) environments kept for testing and deployment.

This expansion of assets, it is argued, poses a serious challenge for enterprise Git adoption, because the design of Git’s internal object storage imposes a practical maximum repository size of a gigabyte or two. Even repositories that fall far short of that limit can exhibit relatively poor performance, depending on the type and size of the assets and the operation at hand.
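One routinely cited mitigation, offered here as a hedged aside rather than anything the vendors quoted below suggest, is the shallow clone: fetch only recent history, so day-to-day operations stop paying for years of accumulated churn. The repository URL is a placeholder:

    # Fetch only the most recent commit's worth of history
    git clone --depth 1 https://example.com/big-repo.git

    # Deepen later if older history turns out to be needed
    cd big-repo
    git fetch --deepen 100

It helps with history bloat, mind, not with the underlying problem of multi-gigabyte working trees.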

A choice of real-world solutions

According to Mark Warren, product marketing director at Perforce, the options here include routes such as ‘narrow cloning’ - a method that, while highly desirable, has been pretty much impossible until now. The developer wants to take only the bits they need without getting bogged down or confused by having all the code all the time (e.g. an iOS client developer probably doesn’t care what the Windows Phone client is doing); narrow cloning lets the coder pick and mix from a cloned selection pack as and when needed.
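Mainline Git has since grown features in a similar spirit: partial clone plus sparse checkout let a developer fetch and materialise only the slice of a repository they actually work on. A minimal sketch, assuming a monorepo with a hypothetical clients/ios directory and a main branch:

    # Partial clone: fetch commits and trees up front, file contents on demand
    git clone --filter=blob:none --no-checkout https://example.com/monorepo.git
    cd monorepo

    # Restrict the working tree to the iOS client's directory
    git sparse-checkout init --cone
    git sparse-checkout set clients/ios
    git checkout main

The Windows Phone client’s files never touch the iOS developer’s disk, which is precisely the effect Warren describes.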

“The goal here is to keep all the bits of an app in one place,” argues Warren. “The coder might be writing beautiful JavaScript but unless the build system can find the right third party binary for the payment handling system or include the latest graphics from the designers, then builds are likely to fail and that means wasting time until the build manager or DevOps engineer sorts out the mess. Hence the need for all assets, not just source code, in the shared repository.”
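Warren’s point has a practical corollary for build pipelines: if binary assets live in a sidecar store such as LFS, the build must fetch them explicitly before compiling. A hedged sketch of a CI build step (build.sh is hypothetical):

    # Replace pointer files with the real binaries before building
    git lfs pull

    # Only then kick off the build proper
    ./build.sh

Skip the first step and you get exactly the failed builds, and the wasted build-manager time, that Warren warns about.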

It’s mono-repo man

Warren logically points to the rise of the mono-repo, as in a single repository. He says that a lot of teams are discovering the joys of the mono-repo, i.e. all the application’s source files exist in one place, so that all ‘stakeholders’ can see everything and fully understand the impact of a change (or, indeed, propagate the changes across all projects at the same time).

“This is a very powerful proposition,” argues Warren. “However, using a tool like Git (which was never intended to hold these gigabytes or terabytes of code and assets) is a real handicap. Unnatural acts are needed to partition up the repo and then to make it behave usefully later; workflows have to adapt to these artificial limits rather than having the tools do what the coders need. Hence the need for an effectively infinitely scalable master repo in Helix, which is especially useful when combined with narrow cloning.”

Where do we go from here?

Unfortunately there are more challenges ahead. If we get past some basic resolution on our approach to cloning and branch management, have we then provisioned for disaster recovery and high availability in the longer term? If we go back to the individual developer level (remember where this argument first started?) then the answer is probably going to be no, isn’t it? But we’re here to discuss the technical bridging challenge between developer freedom and enterprise requirements, so disaster recovery and availability do have to be tabled.

Should the development shop employ standby virtual machines (VMs) as a means of mirroring changes between file systems, so that storage can be swapped out as needed to provide that disaster recovery backbone? The individual developer doesn’t care so much, so the enterprise had better make sure that it does. What about dashboard controls for higher-level project management? What about authentication and security concerns? We haven’t even gone there yet.
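For flavour only, and assuming nothing about any particular VCS product: a crude warm-standby mirror of an on-premises repository store can be as little as a scheduled rsync to a second machine, though a production setup would lean on the version control system’s own replication. Host names and paths are hypothetical:

    # Mirror the repository store to the standby host; --delete keeps
    # the copy exact, -a preserves permissions, -z compresses in transit
    rsync -az --delete /srv/vcs/repos/ standby.example.com:/srv/vcs/repos/

Run that from cron every few minutes and the enterprise at least has something to fail over to, which is more than the lone developer in the wild typically bothers with.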

The realities we can surmise from this discussion are that developers left in the wild will obviously work differently from the way they work inside more regimented enterprise development shops.

We can also agree that a new breed of architectural-level software development tools and delivery methodologies is developing in response to the need to span both functionality and control.

We can also agree that developers are (almost) always right, so the enterprise had better get used to working with that mindset. ®
