BLOG
QUALITY ENGINEERING & TESTING

DevOps: Test at every point in the lifecycle

Sogeti UK's Andrew Fullen recalls how "one firm gave developers cricket bats to threaten testers into getting their code into production faster"

Written by Stuart Sumner

When trying to implement a DevOps culture within an organisation, it's important to test at every point in the lifecycle.

That's the opinion of Andrew Fullen, solution director at Sogeti UK, who described the philosophy as "continuous testing".

"Testing fits in every point in the lifecycle," Fullen began. "Every time you make a decision you need to prove it's working."

That testing doesn't need to be onerous, Fullen added, but he explained that it does need to be used, validated, recorded and auditable, and that organisations need some form of automated testing at every point.

"It needs to be part of the fabric of your entire structure. Start adding tools piecemeal, and you'll get into trouble," he said.

He then described his vision of what an organisation with a strong record of continuous testing would look like.

"Every time you make a decision, before you put any code together, you capture the idea and the requirements, and there should be no differentiation between functional and non-functional testing. If something's secure, that's an important functional requirement. Is it learnable, useable, can people with poor eyesight use it? These are all important functional requirements," said Fullen.

But the problem, he continued, is time. Fullen explained that in the past, organisations used to run projects over much longer periods than today, with two- to three-year projects reasonably common.

"Now you have one or two week sprints, and in the future this idea of continuous delivery will kick off more, or you could even take the Amazon model of releasing every 13 seconds. Not all of those changes are good though. Anyone using AWS recently saw that they can have issues too, despite all their automation and checks."

Fullen then gave an example of a firm he worked with in the past, where the senior stakeholders decided to automate everything.

"Requirments gathering, approval, coding, security checks, repositories, backups, APIs, everything was automated," he said. "You name it, if there was a tool they bought it, if there wasn't they wrote it. It was brilliant, everything was automated,  and it all ran on this lovely rig which was a phenomenol sight with these flashing and blinking lights."

But whilst many aspects of this system were impressive, there were problems too, as he described.

"Then our old friend time came back to haunt us. The weekly build started on a Friday night, and finished at 8am on a Monday morning. Then the testers would come in and spend two weeks working out what actually happened over the weekend. There were hundreds of thousands of log files, all in different formats, created by different tools, and with no centralised reporting because that was pretty much the only thing which hadn't been automated. And some of the most critical things were left until last in the automation chain."

He explained that developers were measured on how quickly they made changes and got them out into the live environment. To this end, according to Fullen, management went so far as to buy cricket bats for those teams so they could threaten the testers, one hopes jokingly, into rushing their work so as not to delay the code.

"The developers and testers were mostly married to one another too, so home time wasn't a good situation either!" he joked.

If anything went wrong with a release, it would take two weeks to find the log file that would enable the team to locate and fix the problem, and with malfunctioning code in the live environment, further releases were inevitably delayed.

"We decided to take time out," said Fullen. "We did a two day off-site meeting to work out where we'd gone wrong and what we could do to sort it out. We spent all day breaking it all down into simple problem statements, sticking them all on a wall and working out what was important and when it needed to run."

It was a data scientist working at the firm who came up with the solution, Fullen added.

"He took a baseline and analysed it to see what everything was doing. Then we just needed to look at what was different in each release [rather than poring over hundreds of thousands of log files]. So we wrote a Perl script and tested only those things which were different, all controlled by a simple SQL database. And we could also change the baseline if the difference was actually how it should have been working."

He explained that the firm used many different operating systems, including various versions of Windows, Linux, Unix, IBM mainframes and Macs.

"Certain things if they broke on the Windows base would also break everywhere, so you only had to fix one problem for loads of other issues to resolve. It was like a funnel system, so there were fewer things to test as you went along," said Fullen.

Using the baseline concept, they were able to perform 14 million tests in a single hour, all automated.

"The number of people needed [for testing] was reduced. The testers and devs started working very closely, with the devs writing tests and the testers starting to write code. Every test told you the ideal state of its behaviour, and everything was in source control."

He continued: "Environments were built and destroyed on demand. We had lots of installation and uninstallation tests, and upgrades and downgrades. Every single data point generated we were capturing. Everything was monitored. We were able to identify if a dev was checking in code which was high risk or low risk. We could also see where there was no risk so we could then optimise those runs. We could tell with high 90 per cent accuracy who was going to find what defects in what week. Judging by normal behaviour, we could guess who would find lots of low priority defects but they would cluster, and behind that cluster was a bigger problem we'd find upstream."

They were also able to see which developers regularly produced bullet-proof code, something teams across the organisation were competitive about.

"We had 3,000 engineers who got to see who were the best devs, engineers and testers. But the teams were all measured together, so they were only as good as their team mates. They either got or didn't get their bonus together, so it was in their interests to work together efficiently," he said.

To read the original article visit Computing's website.
