Leveraging Data to Enhance Quality Assistance: A Case Study


In the fast-paced world of software development, ensuring product quality while meeting tight deadlines is a constant challenge. One company that has successfully tackled this issue is Canva, a popular graphic design platform. By implementing a data-driven approach to their Quality Assistance (QA) model, Canva has optimized their testing processes and improved overall product quality.

The Shift-Left Approach

Canva’s quality assistance model is built on the principle of “shifting left.” This approach involves integrating QA engineers into every stage of the development process, from initial planning to final implementation. By collaborating with product managers, designers, developers, and data analysts from the outset, QA engineers help identify potential risks and edge cases early on, preventing errors before they become costly issues.

Challenges in Quality Assurance

Despite the benefits of the shift-left approach, Canva faced a problem common across the software industry: how to prioritize testing effort when multiple projects compete for attention under similar deadlines. Without a clear focus on the most critical areas, engineers and QA professionals risk spreading themselves too thin, trying to test everything at once.

Data-Driven Decision Making

To address this issue, Canva adopted a data-driven approach to quality assistance. By basing decisions on concrete data, the company could focus its testing efforts on the most frequently used features. This strategy ensures that even if users encounter an error, it’s less likely to be in a critical area that would significantly impact their experience.

Key Strategies for Optimization

Code Coverage Analysis

Canva’s development team utilizes a monorepo structure, with ownership of different code areas clearly defined. By analyzing unit test code coverage, the QA team identified areas with limited automated testing, allowing them to prioritize these sections for additional attention.
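As a rough illustration of how this kind of analysis might work, the sketch below aggregates per-file line coverage into a per-team figure using a CODEOWNERS-style path mapping, then ranks teams from lowest to highest coverage. The file paths, team names, and data shapes are invented for the example; real coverage data would come from a tool such as coverage.py or JaCoCo.

```python
from collections import defaultdict

# Per-file line coverage, e.g. extracted from a coverage.py or JaCoCo report.
# Values are (covered_lines, total_lines); paths are invented for illustration.
file_coverage = {
    "editor/canvas.py": (180, 200),
    "editor/layers.py": (40, 160),
    "export/pdf.py": (90, 100),
    "export/png.py": (10, 80),
}

# CODEOWNERS-style mapping from path prefix to owning team (assumed names).
owners = {
    "editor/": "team-editor",
    "export/": "team-export",
}

def coverage_by_owner(file_cov, owner_map):
    """Aggregate per-file coverage into a coverage ratio per owning team."""
    totals = defaultdict(lambda: [0, 0])  # team -> [covered, total]
    for path, (covered, total) in file_cov.items():
        for prefix, team in owner_map.items():
            if path.startswith(prefix):
                totals[team][0] += covered
                totals[team][1] += total
                break
    return {team: covered / total for team, (covered, total) in totals.items()}

# Lowest-coverage areas first: these are the candidates for extra attention.
ranked = sorted(coverage_by_owner(file_coverage, owners).items(), key=lambda kv: kv[1])
for team, ratio in ranked:
    print(f"{team}: {ratio:.0%} line coverage")
```

Because ownership is explicit in a monorepo, a report like this turns "we should test more" into a concrete, per-team number to act on.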

Understanding User Behavior

QA professionals at Canva recognize that a high-quality product isn’t just bug-free; it must also be intuitive and easy to use. By analyzing user behavior and preferences, the team can identify common usage patterns and potential friction points. This information helps prioritize testing efforts and informs decisions about feature improvements.
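One minimal way to surface common usage patterns is to count how many distinct users touch each feature and test the highest-reach features most thoroughly. The event log and feature names below are hypothetical; in practice the events would come from a product analytics pipeline.

```python
# Hypothetical event log of (user_id, feature) pairs; the feature names
# are invented for this sketch.
events = [
    ("u1", "text-edit"), ("u2", "text-edit"), ("u3", "text-edit"),
    ("u1", "export-pdf"), ("u2", "export-pdf"),
    ("u1", "animate"),
]

def feature_usage(event_log):
    """Count distinct users per feature as a simple proxy for user impact."""
    users_per_feature = {}
    for user, feature in event_log:
        users_per_feature.setdefault(feature, set()).add(user)
    return {feature: len(users) for feature, users in users_per_feature.items()}

# Highest-reach features first: these get the most thorough testing.
ranked = sorted(feature_usage(events).items(), key=lambda kv: -kv[1])
```

Counting distinct users rather than raw events avoids letting a handful of power users dominate the ranking.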

Measuring and Celebrating Progress

To maintain momentum and motivation, Canva’s team set achievable goals for key metrics such as the number of engineering foundation issues, code coverage, and test quantity. Regular reporting on these metrics helped track progress and celebrate small wins, which compound over time to create significant improvements.

Results and Lessons Learned

The implementation of this data-driven quality assistance model has yielded impressive results for Canva:

  1. Improved test coverage and reduced code rework
  2. Increased confidence in releasing new features
  3. Zero incidents across three teams over a three-month period
  4. More accurate bug prioritization, reducing alert fatigue
  5. Better understanding of code health among team members
  6. Quantifiable improvements driven by a bottom-up approach
  7. Reduced risk in feature releases
  8. Enhanced user experience due to fewer bugs in production

Key Takeaways

  1. Pragmatic Approach to Code Coverage: While 100% unit test coverage sounds ideal, Canva learned that there is no magic number for optimal coverage. The team supplements unit tests with other QA activities such as kickoffs, testing parties, and design reviews.
  2. Data-Driven Bug Prioritization: By considering user impact when prioritizing bugs, the team can allocate resources more effectively and reduce alert fatigue.
  3. Objective Metrics for Technical Debt: Having clear, measurable goals allows developers to identify and address technical debt proactively.
  4. Balancing New Features and Quality: Objective measures help teams align on goals that balance shipping new features with maintaining product quality.
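The second takeaway can be made concrete with a simple scoring sketch: weight each bug's severity by the share of users who hit the affected feature, then work the backlog in descending score order. The weights and field names here are assumptions for illustration, not Canva's actual formula.

```python
# Assumed severity weights; a real team would calibrate these themselves.
SEVERITY_WEIGHT = {"critical": 5, "major": 3, "minor": 1}

def impact_score(bug):
    """Severity weight scaled by the share of users who hit the affected feature."""
    return SEVERITY_WEIGHT[bug["severity"]] * bug["affected_users_pct"]

# Hypothetical backlog: a rarely-hit critical bug can rank below a widely-hit
# major one, which is the point of weighting by user impact.
bugs = [
    {"id": "BUG-1", "severity": "minor", "affected_users_pct": 0.80},
    {"id": "BUG-2", "severity": "critical", "affected_users_pct": 0.02},
    {"id": "BUG-3", "severity": "major", "affected_users_pct": 0.40},
]

for bug in sorted(bugs, key=impact_score, reverse=True):
    print(bug["id"], round(impact_score(bug), 2))
```

Ranking by a single combined score also reduces alert fatigue: only bugs above a threshold need to page anyone.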

Conclusion

Canva’s success in optimizing its quality assistance model demonstrates the power of data-driven decision-making in software development. By focusing on user behavior, code coverage, and measurable improvements, the company has created a more efficient and effective QA process. This approach not only reduces the risk of shipping bugs but also enhances the overall user experience by ensuring that the most critical and frequently used features receive the attention they deserve.

As software development continues to evolve, other companies can learn from Canva’s example. By implementing similar data-driven strategies, development teams can optimize their quality assurance processes, leading to more robust products and happier users.
