Cloud has undoubtedly become a key component of successful business in recent years, especially given the race to digitally transform. Across the globe, companies are moving their applications and services to the cloud and reaping the benefits of lower capex and opex as a result.
However, cloud migration is only the beginning of any organization’s digital transformation (DX) journey. If harnessed correctly, cloud is a pillar of innovation for DX, and can be a driving force for new business models and use cases that – even a few years ago – weren’t possible. No one knows this better than devops teams; they hold the line when it comes to continuous delivery and deployment, so it stands to reason that devops plays a crucial role in the digital transformation journey. In practice, however, the decision makers in charge of cloud strategies are rarely those in the bowels of the ship.
A successful DX therefore needs a devops team that is agile, efficient, and able to produce higher-quality software at higher speed, whilst working in a collaborative environment with quality assurance (QA), security, development and IT ops teams.
Devops matures like any other role within DX
At a high level, the role of devops in any company is to produce new software based on business needs at very high velocity, within whatever constraints apply, and at the highest quality possible. High speed here means a continuous delivery pipeline that in extreme cases can deliver several new releases per day, requiring several cycles of code to be built, tested and integrated before deployment. High quality means an excellent customer experience based on responsive and reliable services with, where possible, virtually zero downtime.
The main challenge these development teams face within a company is the level of devops maturity. Similar to the principles of the Software Capability Maturity Model (SW-CMM) and the IDEAL model introduced by the Software Engineering Institute at Carnegie Mellon University, devops maturity rests on two main dimensions. The first is cultural: the need to collaborate effectively and to own the mission, rather than meeting function-centric objectives such as telemetry specific to Operations or QA. The second is the level of holistic visibility and situational awareness, based on telemetry and KPIs relevant to the entire devops organization across all functional teams. The quality of that visibility and awareness depends on both the instrumentation technology and the pervasiveness of its deployment.
Factors such as visibility, telemetry, feedback loops and situational awareness become important once teams have mastered the first, cultural element of maturity. Before then, developers tend to focus on KPIs relevant predominantly to their own function, such as the number of new releases shipped in a day. QA tests against its own use cases, whilst Operations monitors application and service performance in the production environment. In short, everyone is focused on their own fiefdom, often siloed off from one another. An immature devops organization concentrates on accelerating and optimizing its own domains with various technologies, instead of establishing an effective feedback loop, end-to-end visibility, and most importantly a common situational awareness. It is this situational awareness that separates the wheat from the chaff in mature devops teams.
Smart data lights the way for devops teams
By way of demonstration, a typical development cycle proceeds as follows: developers write and build the code. It is then sent to QA for testing, before going to the release manager, who oversees its integration into the mainline and its deployment. At this point, Operations may flag a software issue that only manifests itself at scale, and the Dev team must identify it extremely rapidly and develop new code that addresses the issue and functions correctly in the production environment. All these areas are siloed off and only have visibility of their own space.
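As a rough sketch, the hand-offs above can be modeled as a chain of siloed stages, where each function only sees its own step before passing the build along. All names here (the `Build` record, the stage functions, the flagged issue) are illustrative, not a real CI/CD API:

```python
# Hypothetical model of the Dev -> QA -> Release -> Ops hand-off described above.
from dataclasses import dataclass


@dataclass
class Build:
    version: str
    tested: bool = False
    deployed: bool = False


def develop(version):
    """Dev: write and build the code."""
    return Build(version)


def qa_test(build):
    """QA: run the use-case tests and sign off."""
    build.tested = True
    return build


def deploy(build):
    """Release manager: integrate into the mainline and deploy."""
    assert build.tested, "QA must sign off before deployment"
    build.deployed = True
    return build


def monitor_at_scale(build):
    """Ops: flag issues that only manifest in production, feeding back to Dev."""
    return ["latency regression under peak load"] if build.deployed else []


build = deploy(qa_test(develop("1.4.2")))
issues = monitor_at_scale(build)  # these findings start the next Dev cycle
```

The point of the sketch is the shape of the flow: each stage only touches its own flag, and the only channel back from Ops to Dev is the issue list returned at the very end of the chain.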
Visibility is a crucial part of this entire process, and the common situational awareness that visibility affords all teams makes things even more streamlined in the modern development environment. Instead of Dev teams relying on Ops to highlight problems, for example, they can look at the system themselves, see the same situation, and know which parameters they need to work within. This not only saves time but makes feedback loops significantly more effective.
At the centre of this visibility is smart data: metadata produced by processing and organizing wire data at the collection point and optimizing it for analytics at the highest quality and speed. Continuous monitoring with smart data is based on analysing every IP packet that crosses the network in real time and using that information to deliver actionable and meaningful insights, creating a common situational awareness for all teams. Log data, by comparison, is analysed only after events have been logged and the log files collected from multiple servers in a central location – a fundamentally different approach to monitoring. With smart data providing condensed, actionable and intelligent datasets based on real-time IP traffic analysis, all teams across the company, from QA through to Ops and Dev, can work together in far greater harmony: continuously monitoring evolving leading business indicators, avoiding bottlenecks in the feedback loop, and solving issues in real time. This is the devops dream.
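To make the streaming-versus-batch distinction concrete, here is a toy aggregator that condenses a stream of per-packet metadata records into a rolling service-level indicator as each record arrives, rather than batch-processing log files after the fact. The record fields (`service`, `latency_ms`) and the class are assumptions for illustration, not any vendor's actual smart-data format:

```python
# Illustrative only: real-time rolling aggregation of per-packet metadata,
# as opposed to after-the-fact batch analysis of collected log files.
from collections import defaultdict, deque


class RollingLatency:
    """Keep the last `window` latency samples per service; expose the mean."""

    def __init__(self, window=100):
        self.samples = defaultdict(lambda: deque(maxlen=window))

    def ingest(self, record):
        # record: metadata extracted at the collection point, e.g.
        # {"service": "checkout", "latency_ms": 42}
        self.samples[record["service"]].append(record["latency_ms"])

    def mean_latency(self, service):
        s = self.samples[service]
        return sum(s) / len(s) if s else None


agg = RollingLatency(window=3)
for ms in (10, 20, 60):
    agg.ingest({"service": "checkout", "latency_ms": ms})

avg = agg.mean_latency("checkout")  # 30.0, over the last three packets seen
```

Because the indicator is updated on ingest, every team querying it sees the same current value at the same moment, which is the mechanism behind the "common situational awareness" the article describes.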
This visibility comes into even sharper focus with security, as part of a fully-fledged DevSecOps organization in which security engineers work side by side with developers to assure application security. As with log data, analysing a breach after the fact is a necessary and critical activity, but knowing in real time which application flaws have been exploited is considerably better practice, feeding the loop from Ops back to Dev and to Sec. This way, security issues can be dealt with more effectively at the source. Combined with automation, this process helps create far more secure applications, bypassing the need for war rooms altogether and mitigating potential damage to corporate reputation.
DX, and cloud in particular, are integral to innovation and wider business transformation. They do, however, bring with them a number of new and uncharted challenges. By breaking down departmental silos, providing complete common situational awareness, and fostering a culture that champions collaboration, organizations can make DX a success beyond what they may have expected, and maintaining a competitive edge in a market that is constantly being disrupted will be easier and more fruitful than ever. For the DevSecOps teams in the bowels of the ship, it will be a never-ending journey of producing secure, high-quality code at speed, whilst continuously evolving the maturity of devops practices in the organization through the use of smart data. Mature DevSecOps organizations depend on relevant telemetry and common situational awareness, and only through their continued use will the ship keep sailing true with wind in its sails.
This article is published as part of the IDG Contributor Network.