# Macro trends in the tech industry | Nov 2019

The Technology Radar is a snapshot of things that we’ve recently encountered, the stuff that’s piqued our interest. But the act of creating the Radar also means we have a bunch of fascinating discussions that can’t always be captured as blips or themes. Here’s our latest look into what’s happening in the world of software.

### Race for cloud supremacy resulting in too many feature false starts

As I’ve written about previously, cloud is the dominant infrastructure and architectural style in the industry today, and the major cloud vendors are in a constant fight to build market share and gain a leg up over their competitors. This has led them to push features to the market — in our opinion — before those features and services were really ready for prime time. This is a pattern we’ve seen many times over in the past, where enterprise software vendors would market their product as having more features than a competitor, whether or not those features were actually complete and available in the product. This isn’t a new problem, per se, but it is a fundamental challenge with today’s cloud landscape. It’s also not an accident — this is a deliberate strategy and a consequence of how the cloud companies have structured themselves to get software out of the door really fast.

The race by each cloud platform to deliver new products and services isn’t necessarily going to create good outcomes for the teams using them. The vendors over-promise, so it’s “buyer beware” for our teams. When there’s a new cloud database or other service, it’s critical that teams evaluate whether something is actually ready for their use. Can the team live with the inevitable rough edges and limitations?

### Hybrid cloud tooling starts to take shape

Many large organizations are in a “hybrid cloud” situation where they have more than one cloud provider in use.
The choice to use a single provider or multiple providers is complex and involves not just technology but also commercial, political and even regulatory considerations. For example, organizations in highly regulated industries may need to prove to a regulator that they could easily move to a new cloud provider should their current provider suffer some kind of catastrophic technical or commercial problem that rendered them no longer a going concern. Some of our clients are undertaking significant cloud consolidation work to transition to a single cloud platform, because being on multiple clouds is problematic due to latency, complexity of VPN setup, a desire to consolidate in order to get better pricing from the vendor, or for cloud-specific features such as Kubernetes support or access to particular machine learning algorithms.

Such transitions or consolidations could take years, especially when you consider how legacy on-premise assets may factor into the plan, so organizations need a better way to deal with multiple clouds. A number of “hybrid cloud control planes” are springing up that may help ease the pain. We think Google Anthos, AWS Outposts, Azure Arc and Azure Stack are worth looking at if you’re struggling with multiple clouds.

### “Quantum-ready” could be next year’s strategic play

Google recently trumpeted its achievement in so-called “quantum supremacy” — it has built a quantum computer that can run an algorithm that would be essentially intractable on a classical computer. In this particular case, Google used a 53-qubit quantum computer to solve a problem in 200 seconds that would take a classical supercomputer 10,000 years (IBM has disputed the claims, and says its supercomputer could achieve the result in 2.5 days).
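Supremacy claims aside, the building block here is easy to demystify: the state of n qubits is just a vector of 2^n complex amplitudes, and for a handful of qubits you can simulate it in a few lines of plain Python. A toy sketch (not how a 53-qubit experiment would actually be run, and using no real quantum SDK):

```python
import math

# Toy statevector simulator: an n-qubit state is a list of 2**n complex
# amplitudes. The vendor simulators mentioned in this piece do the same
# job, heavily optimized, for far more qubits.

def apply_h(state, qubit):
    """Hadamard gate on `qubit` (qubit 0 = least significant bit of the index)."""
    s = 1 / math.sqrt(2)
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << qubit)  # partner index with this qubit's bit flipped
        if (i >> qubit) & 1 == 0:
            new[i] += s * amp
            new[j] += s * amp
        else:
            new[j] += s * amp
            new[i] -= s * amp
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is set: a pure permutation of amplitudes."""
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << target) if (i >> control) & 1 else i
        new[j] = amp
    return new

# Build a Bell state from |00>: after H then CNOT, measuring gives
# 00 or 11 with equal probability.
state = [1 + 0j, 0j, 0j, 0j]
state = apply_cnot(apply_h(state, 0), 0, 1)
probs = [abs(a) ** 2 for a in state]  # ≈ [0.5, 0, 0, 0.5]
```

Entangled states like this are exactly what classical simulators struggle with as qubit counts grow: the amplitude list doubles with every qubit added, which is why a 53-qubit device is hard to check classically.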
The key point is to show that quantum computers are more than just an expensive toy in a lab, and that there are no hidden barriers to quantum computing solving important, larger-sized problems.

For now, the problems solvable with a small number of qubits are limited in number and usefulness, but quantum is clearly on the horizon. Canadian startup Xanadu is developing not just quantum chips — using a ‘photonic’ approach to capture quantum effects as opposed to Google’s use of superconductors — but also quantum simulation and training tools. They point out that even though most quantum algorithms today seem a bit theoretical, you can use quantum techniques to speed up problems such as Monte Carlo simulation, something that’s very useful today in fields such as FinTech.

As with many technology shifts (big data, blockchain, machine learning) it’s important to at least have a passing familiarity with the technology and what it might do for your business. IBM, Microsoft and Google all provide tools to simulate quantum computers, as well as in some cases access to real quantum computing hardware. While your organization may not (yet) be able to take advantage of highly specific algorithmic speedups, “Quantum-ready developer” could soon become popular in the way “data scientist” has in the past.

### 90% decommissioned is 0% saved

As an industry, IT constantly faces the pressure of legacy systems. If something is old, it might not be adaptable enough for today’s fast pace of change, too expensive to maintain, or just plain risky — creaky systems running on eBay’d hardware can be a big liability. As IT professionals we constantly need to deal with, and eventually retire, legacy systems. One cool-sounding approach to legacy replacement is the Strangler Fig Application, where we build around and augment a legacy system, intending to eventually retire it completely.
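In practice the pattern usually starts with a facade that routes traffic between old and new. A minimal sketch of that routing decision (the path prefixes and backend names here are hypothetical, purely for illustration):

```python
# Minimal strangler-fig facade: requests for capabilities that have been
# rebuilt go to the new service; everything else still hits the legacy
# system. The prefixes and backend names are hypothetical.

MIGRATED_PREFIXES = ("/invoices", "/customers")  # grows as migration proceeds

def route(path: str) -> str:
    """Pick a backend for an incoming request path."""
    if path.startswith(MIGRATED_PREFIXES):  # str.startswith accepts a tuple
        return "new-service"
    return "legacy-system"
```

Each time a capability is rebuilt, its prefix moves into the migrated set; the retirement is only real once the legacy branch of this function becomes unreachable.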
This pattern gets a lot of attention, not least due to the violent-sounding name — many people would like to do violence to some of these frustrating older systems, so you tend to get a lot of support for a strategy that involves “strangling” one of them. The problem comes when we claim to be strangling the legacy system, but end up just building extra systems and APIs on top. We never actually retire the legacy. Our colleague Jonny LeRoy (famed for his ability to name things) suggested that we put “neck massage for legacy systems” on ‘Hold.’ We felt the blip was too complex for the Radar, but people liked the message: if we plan to retire a legacy system using the strangler pattern, we’d better actually get around to that retirement, or often the whole justification for our efforts falls apart.

### Trunk-based development seems to be losing the fight

We’ve campaigned for years that trunk-based development, where every developer commits their code directly to a “main line” of source control (and does so daily or better), is the best way to create software. As someone who’s seen a lot of source code messes, I can tell you that branching is not free (or even cheap) and that even fancy code merging with tools such as Git doesn’t save a team from the problems caused by a “continuous isolation” style of development. The usual reasons given for wanting code branches are actually signs of deeper problems with a team or a system architecture, and should be solved directly instead of using code branches. For example, if you don’t trust certain developers to commit code to your project and you use branches or pull requests as a code review mechanism, maybe you should fix the core trust issue instead.
If you’re not sure you’re going to hit a project deadline and want to use branches to “cherry-pick” changes for a release candidate, you’re in a world of hurt and should fix your estimation, prioritization and project management problems rather than using branches to band-aid the problem.

Unfortunately, we seem to be losing the fight on this one. Short-lived branching techniques such as GitFlow continue to gain traction, as does the use of pull requests for governance activities such as code review. Our erstwhile colleague Paul Hammant, who created and maintains trunkbaseddevelopment.com, has (grudgingly, I hope!) included short-lived feature branches as a recommendation for how to do trunk-based development at scale. We’re a little glum that our favored technique seems to be losing the fight, but we hope like-minded teams will continue to push for sane, trunk-based development where possible.

### XR is waiting for Apple

At the recent Facebook Connect conference, Oculus confirmed they are working on AR glasses but didn’t have anything specific to announce. The most recent leaks and rumors suggest that Apple will launch an XR headset of some kind in 2020, with AR glasses planned for 2022. As with many other advances such as the smartphone and smartwatch, Apple will probably lead the way when it comes to creating really compelling experience design. Apple’s magic has always been to combine engineering advancements with a great consumer experience, and it doesn’t enter a market until it can truly do that. For a long time (and maybe still today) Apple’s Human Interface Design guidelines have been required reading for anyone building an app. I expect a similar leap forward when Apple (eventually) gets into the AR space.
Until then, while we have some nifty demos and some limited training experiences, XR is going to remain a bit of a niche technology.

### Machine learning continues to amaze and astonish, but do we understand it?

One of my favourite YouTube channels is Two Minute Papers, in which researcher Károly Zsolnai-Fehér provides mind-blowing reporting on advances in AI systems. Recently the channel has featured AI that can mimic a human voice given just five seconds of input, AI that can infer game physics 30,000 times faster than a traditional physical simulation, and AI that learns to play Hide and Seek and literally breaks the rules of the game world within which it’s playing. The channel does a great job of showing the amazing (and slightly scary) advancements in narrow-AI capability, usually for problems that can be visualized and make for good videos. But machine learning is also being applied to many other fields such as business decision making, medicine, and even advising judges on sentencing criminals, so it’s important that we understand how an AI or machine learning system works.

One big problem is that although we can describe what an underlying algorithm is doing (for example, how back propagation of a neural network works), we can’t explain what the network actually does once trained. This Radar features tools such as What-If and techniques such as ethical bias testing. We think that explainability should be a first-class concern when choosing a machine learning model.

### Mechanical sympathy comes around again

Back in 2012, the Radar featured a concept called “mechanical sympathy” based on the work of the LMAX Disruptor team. At a time when many software applications were being written at an increasing level of abstraction, Disruptor got closer to the metal, being tuned for extremely high performance on specific Intel CPUs. The LMAX problem was inherently single threaded, and they needed high performance from single-CPU machines.
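The core idea behind mechanical sympathy is writing code that matches how the hardware actually moves data. The classic illustration is matrix traversal order: identical arithmetic, very different memory-access patterns. A toy sketch (in a systems language the cache-line effect is dramatic; CPython’s interpreter overhead mutes it, but the principle is the same):

```python
import time

# Mechanical sympathy in miniature: one traversal walks memory roughly
# sequentially, the other hops between rows on every single read.

N = 1_000
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    """Visit each row's elements in order: sequential, cache-friendly reads."""
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_col_major(m):
    """Visit column by column: every read lands in a different row object."""
    total = 0
    for j in range(len(m[0])):
        for row in m:
            total += row[j]
    return total

for fn in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    result = fn(matrix)
    print(fn.__name__, result, f"{time.perf_counter() - start:.4f}s")
```

Both functions return the same answer; on most machines the row-major walk is the quicker of the two, and rewriting the same experiment in C or Java makes the gap far larger.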
It seems like mechanical sympathy is having something of a resurgence. Last Radar we featured Humio, a log aggregation tool built to be super fast at both log aggregation and querying. This Radar, we’re featuring GraalVM, a high performance virtual machine. We think it’s ironic that much of the progress in the software industry is getting things away from the hardware (containers, Kubernetes, Functions-as-a-Service, databases-in-the-cloud, and so on) and yet others are hyper-focused on the hardware on which we’re running. I guess it depends on the use case. Do you need scale and elasticity? Then get away from the hardware and get to cloud. Do you have a very specific use case like high-frequency trading? Then get closer to the hardware with some of these techniques.

I hope you’ve enjoyed this lightning tour of current trends in the tech industry. There are some others that I didn’t have room for, but if you’re interested in software development as a team sport, or in protecting the software supply chain, you can read about those in the Radar.