How to test with agility and confidence?

In a context of evolving market conditions and accelerated time-to-market, how is it possible to ensure the quality of IT applications with agility and confidence?

In this interview, Jean François Poilprêt (JP), Senior Manager, and Benjamin Audren (BA), Testing Consultant, both from the Test Team at ELCA, describe how automated tests and continuous integration enable continuous improvement of software quality within short development cycles.

In a context of accelerated time-to-market, how is it possible to ensure the quality of IT applications?

JP - This has become a major challenge in IT nowadays! Customers are now more closely involved in defining the final product: they want to see results early and they want fast delivery, so that end users can get the benefits as soon as possible.

With such expectations, new approaches to the software development lifecycle have flourished, in particular the "agile" methodologies (Extreme Programming, SCRUM). ELCA has developed and is now using its own AgileIT methodology to meet customers’ new expectations. Agile methodologies impose short development cycles (from 2 to 6 weeks), called sprints. Each sprint must deliver a runnable system containing all the features selected for it.

This all looks very challenging to deliver in such a short time. How can you then keep the focus on quality?

JP - Such a short duration for a complete development lifecycle has imposed new ways of organizing the project team and making testing more efficient. Indeed, one sprint is focused on a set of features (a.k.a. "user stories") but must deliver a running system at its end, including all features from the previous sprints. On the one hand, without proper testing, the risk of regression (i.e., a previously working feature no longer works) is high; on the other hand, testing all features again and again (once per sprint) is costly and time-consuming. This is where test automation comes in: automating the tests for a feature, although more costly than manually testing that feature once, quickly provides a return on investment after a few executions, since later executions cost almost nothing and evolutions can ship faster!
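To make this concrete, here is a minimal, self-contained sketch of such an automated regression test, written in Python with pytest. The UserService class is invented for the example; in a real project, the tests would drive the actual application through its API or user interface.

```python
# Minimal sketch of automated regression tests (pytest).
# UserService is a stand-in for a real application feature.
import pytest


class DuplicateUserError(Exception):
    pass


class UserService:
    """Toy in-memory implementation of an "add user" feature."""

    def __init__(self):
        self._users = {}

    def create_user(self, name):
        if name in self._users:
            raise DuplicateUserError(name)
        self._users[name] = {"name": name}
        return self._users[name]

    def get_user(self, name):
        return self._users[name]


def test_created_user_can_be_retrieved():
    service = UserService()
    service.create_user("Alice")
    assert service.get_user("Alice")["name"] == "Alice"


def test_duplicate_user_is_rejected():
    service = UserService()
    service.create_user("Bob")
    with pytest.raises(DuplicateUserError):
        service.create_user("Bob")
```

Once such tests exist, rerunning them at every sprint costs essentially nothing, which is exactly where the return on investment comes from.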


Automated testing looks good, but how do you ensure the quality of what is being developed on a daily basis?

JP - In order to ensure sprint efficiency, agile methodologies also favour discovering and fixing defects as soon as possible. This has become possible thanks to Continuous Integration: each time a team member has finished developing a feature, they add its source code to the overall project source code (stored in a "source repository"), and a dedicated machine then (1) gets the latest source code for the whole project, (2) builds the application from that source code, (3) deploys the application to a dedicated environment where it can be tested, and (4) runs all automated tests on the newly deployed application.
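As a rough illustration, these four steps boil down to something like the following script; the repository URL and the build, deploy and test commands are placeholders, and in practice the equivalent logic lives in the configuration of a CI server such as Jenkins or GitLab CI.

```python
# Sketch of the four continuous integration steps as a plain Python script.
# The repository URL and the build/deploy/test commands are placeholders;
# a real setup would be configured in a CI server, not hand-written.
import subprocess
import sys


def run(command):
    """Run a shell command and stop the pipeline at the first failure."""
    print(f"--> {command}")
    if subprocess.run(command, shell=True).returncode != 0:
        sys.exit(f"Build broken at step: {command}")


def continuous_integration():
    run("git clone https://example.org/project.git workspace")  # (1) get the latest sources
    run("./workspace/build.sh")                                  # (2) build the application
    run("./workspace/deploy.sh test-environment")                # (3) deploy to a test environment
    run("./workspace/run_all_tests.sh")                          # (4) run all automated tests


if __name__ == "__main__":
    continuous_integration()
```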

When this is finished, a notification with the results is sent to the project team; in particular, if a failure occurred during continuous integration, the developer responsible for it (i.e., the one who added source code last, and who is said to "have broken the build") is in charge of fixing the failure as soon as possible. This way of working ensures that the latest source code always generates a runnable system. Moreover, fixing a defect as soon as it appears is much easier than fixing it much later, in particular if it is only discovered once the application is in production.


Is it possible to automate all the testing activities for a given application?

BA - It is always tempting to answer "yes" to such a question, especially when dealing with teams crushed under extremely heavy manual testing and looking for an ultimate solution to their problems. In practice, it is useful to distinguish between manipulating the application and testing it. In most cases it is possible to pilot the application in all the desired ways, to simulate how a user would interact with it. Depending on the complexity of the application and its interactions with other programs on the machine, this automatic pilot can be time-consuming to write and maintain.

However, testing is not only manipulating the application: it is the process of understanding its current state and finding any way in which it does not perform properly. Checking is an important part of this process, and it can be easily automated because of its explicit nature (these numbers should be the same, this user should not have the rights to do that, etc.). But checking only verifies current expectations against some specification; it does not guarantee that the system actually works. Testing should also encompass investigating, asking questions and exploring the system, and may have open-ended results. This so-called "exploratory testing" is an essentially intellectual task, which may be helped by automation tools, very much like programming can be helped by a good Integrated Development Environment that does part of the heavy lifting. By automating away the repetitive and error-prone parts, the agile team can better leverage the developers' intrinsic knowledge of the application and achieve better quality.
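The explicit nature of checks is what makes them easy to automate. The two examples mentioned above could, for instance, look like this (the permission model and the invoice figures are invented for illustration):

```python
# Sketch of automated "checks": explicit, mechanically verifiable expectations.
# The permission model and the invoice figures are purely illustrative.

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
}


def has_permission(role, action):
    return action in ROLE_PERMISSIONS.get(role, set())


def test_invoice_total_matches_its_lines():
    # "these numbers should be the same"
    invoice_lines = [40.0, 60.0]
    invoice_total = 100.0
    assert sum(invoice_lines) == invoice_total


def test_viewer_cannot_write():
    # "this user should not have the rights to do that"
    assert not has_permission("viewer", "write")
```

Exploratory testing, by contrast, has no such fixed expectations to encode, which is why it resists this kind of automation.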

How do you decide what can or should be automated?

JP - That is the million-dollar question! It is not so easy, because everybody would like 100% of the tests to be automated but, as mentioned above, this is not possible; and at the same time, we are not given infinite time and money to automate as much as we would like.

We typically follow a risk- and value-based approach, combined with a simple ROI assessment. First we prioritize all features by value, i.e., which features bring the most value to users most of the time; here, we also integrate usage frequency into the assessment. Feature priority is then adjusted based on the risk for our customer if the feature does not work properly (for example, we may estimate the daily losses if the feature fails) and on the existence of workarounds if the feature is not available. Sometimes we also integrate the probability of breaking a feature in the long term (regression); this probability is assessed based on business complexity and source code complexity, but also on the number of features that depend on the same code. Last but not least, we estimate the complexity of automating a test case (hence the effort needed for it) compared with executing it manually. It is not always easy to define clear rules on which test cases shall be automated and which shall not; in practice, we sometimes only provide guidelines. In a SCRUM-based project, the decision can be taken collectively when the feature is coded, or just before, during the SCRUM "Sprint Planning 1" meeting, in agreement with the Product Owner.
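One way to picture such an assessment (the scales and the scoring formula below are invented for illustration, not an ELCA standard) is to score each feature and relate the automation effort to the number of times the test is expected to run:

```python
# Illustrative scoring of test-automation candidates. The 1-5 scales and the
# scoring formula are examples only, not a prescribed methodology.
from dataclasses import dataclass


@dataclass
class Feature:
    name: str
    business_value: int      # 1 (low) .. 5 (high), including usage frequency
    failure_risk: int        # 1 (harmless, easy workaround) .. 5 (costly, no workaround)
    regression_risk: int     # 1 (stable, isolated code) .. 5 (complex, widely shared code)
    automation_effort: int   # cost of automating, in manual-execution equivalents
    expected_runs: int       # manual executions expected over the product's lifetime


def priority(feature: Feature) -> float:
    """Higher is better: value and risk push towards automation, effort pushes back."""
    benefit = feature.business_value + feature.failure_risk + feature.regression_risk
    roi = feature.expected_runs / max(feature.automation_effort, 1)
    return benefit * roi


features = [
    Feature("login", 5, 5, 3, automation_effort=4, expected_runs=50),
    Feature("export to PDF", 2, 2, 1, automation_effort=10, expected_runs=6),
]

for feature in sorted(features, key=priority, reverse=True):
    print(f"{feature.name}: priority score {priority(feature):.1f}")
```

The exact numbers matter less than making the trade-off explicit, so that the team and the Product Owner can discuss it.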


How sustainable are testing strategies for evolving applications?

BA - We already discussed the need to choose which scenarios to check automatically, because of the cost in time and effort involved. It would be worrying if this automation effort had to be repeated every time the application changed through its normal evolution: it would then be impossible to keep up with a quickly evolving piece of software. Fortunately, this is where the architectural design of the automation comes into play. By using a layered pattern, it is possible to isolate the potentially brittle, technical parts from the high-level description of the test scenarios. Thanks to this, test maintenance is manageable, and the scenarios become clear, descriptive documentation of the functionalities.
For instance, a test scenario may use keywords such as "Add a new user" or "Upload a document" to describe, in functional terms, what is performed. All the actual implementation (the part most sensitive to application evolution) is done inside these keywords.
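A minimal sketch of such a layered, keyword-driven test could look as follows; the FakeBrowser class stands in for a real UI driver (such as Selenium), and the keyword names and selectors are invented for the example:

```python
# Sketch of a layered, keyword-driven test: the scenario reads like
# documentation, and all UI-level details live inside the keywords.
# FakeBrowser stands in for a real UI driver; selectors are illustrative.

class FakeBrowser:
    """Stand-in for a real UI driver; stores values instead of clicking."""

    def __init__(self):
        self.fields = {}
        self.users = []

    def click(self, selector):
        if selector == "user.save":
            self.users.append(self.fields.get("user.name"))

    def fill(self, selector, value):
        self.fields[selector] = value

    def read_table(self, selector):
        return list(self.users)


class Keywords:
    """Implementation layer: the only part that changes when the UI changes."""

    def __init__(self, browser):
        self.browser = browser

    def add_new_user(self, name):
        self.browser.click("menu.users")
        self.browser.fill("user.name", name)
        self.browser.click("user.save")

    def user_exists(self, name):
        return name in self.browser.read_table("users.table")


def test_new_user_can_be_created():
    # Scenario layer: reads like functional documentation.
    keywords = Keywords(FakeBrowser())
    keywords.add_new_user("Alice")
    assert keywords.user_exists("Alice")
```

If the user interface changes, only the keyword layer (and the underlying driver) needs to be adapted; the scenario itself, which documents the functionality, stays untouched.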
The advantage of having these stable, automated scenarios that read like documentation is that they survive a refactoring or a large change to the UI (User Interface). Since they verify the functionalities from the end user's point of view, they guard against regressions and give the development team confidence that changes in the background do not affect the foreground. Today, refactoring an application in depth to meet new demands from the market is a frequent challenge, whether it means redesigning the underlying models to improve performance, introducing microservices to scale the team, externalizing a part of the application, or relying on external data sources. Having a robust, maintainable automated part of the testing creates a quality safeguard, which frees up resources to accomplish these new goals with agility and confidence.
