Accelerating enterprise transformation with DevOps
With the new normal shaping up, transformation – whether it is technological, operational, or cultural – has become inevitable, and to an extent, essential for survival.
An enterprise's ability to get back on track quickly and mitigate the impact of the pandemic has proven to be a strong predictor of its resilience during these tough times, and of its capacity to continue delivering services to end customers at the same or a higher level of satisfaction.
Since the transformation required to achieve business continuity extends not only to operational perspectives but also to cultural aspects, DevOps adoption has become the go-to solution.
DevOps stretches well beyond simply bringing the two functions together. It inculcates the practices, processes, and culture that have become the need of the hour.
Aimee Bechtle, the Head of Global Market Intelligence, DevOps, and cloud platform engineering enablement group at S&P, recently spoke on our podcast about the imperative role of DevOps in driving enterprise transformation. Having extensive experience as a transformational change agent and with a deep perspective on how to make technology and cultural transformations successful, Aimee offered critical DevOps-related insights. This blog is an excerpt from her discussion on the QATalks podcast.
Trends related to the evolution of DevOps
Since the conception of DevOps, there has been significant evolution in terms of what it stands for as well as how enterprises and people understand and embrace it. The gap between the actual meaning of DevOps and its understanding led to many failed adoptions across industries. But as this understanding evolves, the gap is narrowing, and more and more organizations are now adopting the DevOps culture successfully.
Talking about some of the trends that Aimee has seen over the years regarding the evolution of DevOps, she said – “I’m seeing more of a trend toward applying what I call the DevOps principles or fundamentals of DevOps, where they’re driving development teams to have operational functions and own the product from cradle to grave, from code commit to operations or production, and seeing the team topologies change and moving beyond just implementing pipelines to changing the responsibilities and accountabilities of the teams, and then moving architectures to microservices, and really decoupling and slaying that monolith to allow them to go faster.”
While Aimee spoke about the DevOps fundamentals, she also noted the need for incorporating test automation into DevOps – “I would say that test automation is a topic that was considered a “nice to have” a few years back in the context of Agile and DevOps. Without actually having test automation in your overall plan, you simply can’t make those initiatives work, because you are trying to deliver these incremental changes into production as quickly as possible. And to be able to do that, you need a mechanism to test not only the changes that are introduced, but also the entire system. That’s where test automation kicks in. What we’ve seen in many of the large software projects being undertaken is that test automation is very critical, almost to the extent that without it, you simply can’t deliver the project.”
Balancing speed with security
For organizations, especially those operating in highly regulated environments, security is a prime concern. But at the same time, speed is essential too. They can compromise on neither, and therefore they need to strike an equilibrium between these two aspects.
With her specialization in Agile-DevOps in the cloud in highly regulated financial environments, Aimee notes how this balance can be achieved – “I’m seeing an increase in tools and technologies that you can bake into a pipeline to stop builds and bad code from going into production. I’m seeing more coding practices and the focus on application security with secure code reviews and educational programs being put in place to train developers on that. And more focus on the Ransomware lately, especially in the cloud and what we’re doing in the cloud to protect ourselves from Ransomware”.
She shares a few tactical practices that enterprises can incorporate into their day-to-day operations to manage the balancing act between speed, security, and compliance –
“One of the first things we do is we make sure that in our backlogs we give time and budget to non-feature delivery work. That is number one. At least 20 to 30% of your backlog should be focused on enabling work and security remediation. And in security, we really focus in our delivery pipelines on doing the static security analysis and the dynamic security scanning on our repositories and our artifact management systems, to make sure that there’s nothing in there that we would put out. We are also proactive with how we maintain container images and machine images to make sure that they are compliant. We patch regularly, we’re always looking at least-privilege principles and making sure that we are giving role-based access, and really locking down production. I think one of the biggest things in DevOps and continuous delivery is adhering to the segregation of duties and SOX compliance. And that’s what it looks like with continuous delivery or deployment, where you don’t want to hand off to somebody for that approval to meet that requirement, and you look at how you can use source control and GitOps. Being familiar with GitOps principles helps meet segregation of duties – that merge approval, with all of the evidence, is considered segregation of duties”.
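The pipeline gates Aimee describes – stopping builds and bad code from reaching production based on static and dynamic scan results – can be illustrated with a minimal sketch. This is a hedged example, not tied to any specific scanner: the JSON report format and the `gate` function are illustrative assumptions.

```python
import json
import sys

# Severities that should block a build. CI would run this script after the
# security scan and fail the pipeline on a non-zero exit code.
BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}

def gate(report_path: str) -> int:
    """Return 1 (fail the build) if the scan report has blocking findings."""
    with open(report_path) as f:
        findings = json.load(f)
    blocking = [x for x in findings if x.get("severity") in BLOCKING_SEVERITIES]
    for finding in blocking:
        print(f"BLOCKED: {finding['rule']} in {finding['file']} ({finding['severity']})")
    return 1 if blocking else 0

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(gate(sys.argv[1]))
```

In a real pipeline, the same pattern applies whether the report comes from a static analyzer, a dependency scanner, or a dynamic scan: the gate only needs a machine-readable report and an agreed severity threshold.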
When QA is outsourced, security is usually one of the top concerns for enterprises. Here are some of the key practices undertaken at Cigniti to ensure top-notch security along with speed for the client –
“At each engagement that we do, depending on its nature, we make sure that we implement multiple layers of security. We have projects that are highly sensitive where the whole network is segregated, which means that nobody from outside can communicate with the ones within the network, so no transmission of information can happen. Also, in some cases, even the physical environment is segregated, meaning that only people with an access card can enter that room or floor. And in some of the very sensitive areas that we are working in, we do not allow people to carry any devices that can record or transmit information, like mobile devices. So they have to leave them outside of those project areas, do whatever they have to do, and then pick them up once they come back. In addition to that, we also have constant surveillance cameras running, so everything is continuously recorded to make sure that nothing happens that could cause any harm to the IP of the clients.
Once you have those best practices in place, you are actually preventing people from doing anything that they’re not supposed to do. One of the other aspects is that of data. In many large organizations, teams simply make a copy of their production data and use it as test data. Which is okay, but in heavily regulated environments, it’s a complete no-no. So then you have to have a mechanism – products or utilities that you build – so that you mock the data in a way that it still has the overall structure of the production data, but you can’t pinpoint a particular person or a transaction and things like that.”
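The data-mocking approach described above – keeping the structure of production data while removing anything that identifies a real person or transaction – can be sketched as follows. The field names and masking rules here are illustrative assumptions, not a description of any particular product.

```python
import hashlib
import random

def mask_record(record: dict) -> dict:
    """Mask identifying fields while preserving the record's structure."""
    masked = dict(record)
    # Deterministic pseudonym: the same real name always maps to the same
    # fake name, so referential integrity across tables is preserved.
    digest = hashlib.sha256(record["customer_name"].encode()).hexdigest()
    masked["customer_name"] = f"Customer-{digest[:8]}"
    # Keep the account number's format and length, but randomize the digits.
    masked["account_number"] = "".join(
        random.choice("0123456789") for _ in record["account_number"]
    )
    # Non-identifying fields (amounts, dates) are left as-is for realism.
    return masked

record = {"customer_name": "Jane Doe", "account_number": "1234567890", "amount": 99.5}
print(mask_record(record))
```

Real masking utilities also handle format-preserving encryption, consistent cross-table keys, and locale-aware fake values, but the core idea is the same: same shape, no traceability.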
The shift from monoliths to microservices
As DevOps gets widely adopted worldwide, modern architectural practices are being embraced as well. To accelerate their transformation initiatives, enterprises are moving away from legacy monolithic architectures to loosely coupled microservice architectures.
Speaking about the QA practice for microservices –
“If you look at a lot of the test strategies that people generally make, they’re very well suited for monolithic applications. Because, as you see the evolution, apps built on a monolithic architecture are relatively easy to test. If you look at microservices architecture, it is all about a combination of different services coming together dynamically and offering functionality to the end user. And these different services have to be well-coordinated in real time. What it means is that every microservice you have is standalone by nature, meaning it accepts certain inputs and then provides certain outputs. So it is less about what is in them, and more about what they can provide to the consumer of these services. So each and every service has to be tested in isolation as if it is one big software program. And many times the owner of the microservice probably can’t even imagine how the service is going to be used by its consumers. There are a lot of tests that need to be done, in terms of what the service can do, but also checking what it shouldn’t be doing, and building in these failure scenarios is very important. I think that’s where test automation comes into play, and the ability to test these multiple services in different combinations plays a huge role. And sometimes not all these services are available to you in real time; that’s when you have to do some mock-ups for these different services, because you still want to verify the entire business process functionality even though the service is not available to you.”
How can we help
At Cigniti, we standardize efforts and ensure accelerated time-to-market with DevOps testing solutions. We focus on delivering improved deployment quality with greater operational efficiency. Our DevOps testing specialists, with their deep experience in Continuous Integration (CI) and Continuous Deployment (CD) testing, help configure and execute popular CI/CD tools supporting your DevOps transformation and application testing efforts.
Reach out to us for a detailed discussion.