
For high-performing software companies serving demanding industries like legal tech, every product release is a promise to move faster, deliver smarter features, and meet rising user expectations. But when testing cycles start feeling like bottlenecks, that promise gets harder to keep.
If you’ve ever felt the friction of slow QA cycles, late-stage bug discoveries, or expensive tools that underdeliver, you’re in the right place.
In this case study, we’ll explore how ThinkSys partnered with Centerbase, a legal practice management software provider, to improve their regression testing process. You’ll learn how modern automation strategies helped them cut testing time in half, uncover bugs earlier, and deliver with confidence, all within their budget.
Meet Centerbase
Centerbase is a leading legal practice management software provider built to help law firms operate more efficiently. Their all-in-one platform enables firms to:
- Manage matters, billing, and accounting
- Improve productivity for timekeepers
- Meet the expectations of modern clients
- Gain visibility into firm-wide performance with real-time analytics
From a client’s first contact to final invoicing, Centerbase supports every step of the legal workflow. Internally, however, their own development workflow was hampered by several challenges.
Challenges
- Manual Regression Testing Took Too Long: Each testing cycle stretched over 3–4 weeks, slowing down their release pipeline and making it difficult to maintain the pace required by modern product development.
- Product Delivery Was Slowed Down: These extended test cycles delayed shipping new features and updates, which directly impacted the client’s ability to respond quickly to customer needs and competitive pressures.
- Bugs Were Found Too Late: With manual testing happening so late in the cycle, many bugs slipped through until release prep, resulting in last-minute fixes and reduced confidence in build stability.
- Automation Tool Underperformed and Cost More: Centerbase had tried an AI-based automation tool, but it delivered less than 10% of its expected value and was too expensive to justify the limited return.
- QA Team Was Overburdened: The QA team spent significant time re-testing core workflows manually, limiting their bandwidth for exploratory testing or supporting faster development cycles.
Solution
After analyzing the challenges in Centerbase’s testing process, ThinkSys outlined a targeted plan to address each challenge with practical and cost-effective automation strategies:
- Replace Ineffective Tool with a Tailored PoC: We proposed building a custom Proof of Concept using Playwright with TypeScript. This would automate actual user workflows, unlike the previous tool, and demonstrate clear value before further investment.
- Focus Automation on High-Impact Areas: Rather than automate everything, we planned to prioritize critical user flows and core navigation paths, ensuring testing effort went where it mattered most and maximizing ROI from the start.
- Handle Sluggish Logins with Smart Caching: To overcome login-related delays, we suggested a cached login approach. This strategy would bypass repeated authentication across tests, saving time and improving test stability; a simplified sketch of the idea appears after this list.
- Stabilize Tests with Smarter Locator Strategies: We advised implementing robust locator logic to prevent test failures caused by minor UI changes, improving long-term maintainability and reducing the need for constant test script rewrites.
- Build for Scale, Not Just Speed: Beyond test cases, we proposed a sustainable QA automation framework. This would allow Centerbase to expand coverage confidently over time, starting with adequate regression and scaling up as needed.
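To illustrate the cached login idea, here is a minimal sketch using Playwright’s storage-state mechanism. The URL, field labels, and file path are hypothetical placeholders, not Centerbase’s actual implementation:

```typescript
// auth.setup.ts — log in once, then reuse the saved session in every test.
import { test as setup, expect } from '@playwright/test';

const authFile = '.auth/user.json'; // hypothetical path for the cached session

setup('authenticate and cache the session', async ({ page }) => {
  await page.goto('https://app.example.com/login'); // placeholder URL
  await page.getByLabel('Email').fill(process.env.TEST_USER ?? '');
  await page.getByLabel('Password').fill(process.env.TEST_PASS ?? '');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();

  // Persist cookies and local storage; tests configured with this
  // storageState start already authenticated, skipping the login UI.
  await page.context().storageState({ path: authFile });
});
```

In playwright.config.ts, a test project can then declare `storageState: '.auth/user.json'` and depend on this setup project, so the login flow runs once per suite instead of once per test.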
Results

- Regression Testing Time Cut in Half: Automation reduced the full regression cycle from 3–4 weeks to just 2 weeks, allowing the team to release updates faster without sacrificing quality or confidence in test coverage.
- Early Bug Detection Improved Release Readiness: Bugs that previously surfaced only during release prep were now caught much earlier in the cycle, giving developers more time to fix issues and reducing the risk of post-release surprises.
- Significant Cost Savings from Smarter Automation: By moving away from an expensive, low-value tool, Centerbase reduced costs while getting better results, freeing up resources for higher-impact QA and product development efforts.
- 30% of Regression Suite Now Fully Automated: Thirty percent of the regression test cases were automated with Playwright, giving the QA team more breathing room to focus on exploratory testing and other high-priority tasks.
- Stronger Confidence in Long-Term Code Stability: With consistent automated coverage across key workflows, the team now had better visibility into application health, leading to more stable releases and fewer surprises in production.
Step-by-Step Implementation

- Step 1: Build a Targeted Proof of Concept
We began by developing a Proof of Concept (PoC) that automated over 300 end-to-end steps using Playwright and TypeScript. The goal was to validate real-world coverage, not just theoretical coverage, so we chose high-impact workflows that the client’s previous tool had failed to handle. This stage proved that our stack could handle dynamic UIs and complex navigation while offering better long-term ROI. Just as importantly, it built early trust with the client team. A simplified example of such a test is sketched below.
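The sketch below shows the general shape of a PoC-style end-to-end test. The workflow, route, and element names are illustrative stand-ins, not Centerbase’s actual UI:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical high-impact workflow: create a matter and verify it appears
// in the matter list. Routes and labels are placeholders.
test('user can create a matter and see it listed', async ({ page }) => {
  await page.goto('/matters'); // assumes baseURL is set in playwright.config.ts
  await page.getByRole('button', { name: 'New Matter' }).click();
  await page.getByLabel('Matter name').fill('Smith v. Jones');
  await page.getByRole('button', { name: 'Save' }).click();

  // Playwright's web-first assertion retries until the row renders,
  // rather than failing instantly on a slow UI.
  await expect(page.getByRole('row', { name: /Smith v\. Jones/ })).toBeVisible();
});
```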
- Step 2: Prioritize Test Case Selection Based on Business Value
Instead of blindly automating every scenario, we worked closely with Centerbase stakeholders to identify which user actions had the highest business impact and focused on automating mission-critical regression cases first. This ensured that automation delivered immediate value. We also weighed test complexity and risk level to avoid time-consuming, unstable cases that didn’t justify the engineering effort early on.
- Step 3: Implement Resilient Architecture for Speed and Stability
To address prior tool limitations and known app slowness, we introduced a cached login strategy to bypass repeated logins and improve execution speed. We also built intelligent wait mechanisms and reusable page objects so test scripts stayed resilient against locator changes. Anticipating flaky-test risks, we used conditional logic to handle optional UI states, making the framework more reliable in production-like environments. A sketch of this page-object pattern follows.
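As a rough illustration of the page-object and resilient-locator ideas, here is a minimal sketch. The class, route, and dialog are hypothetical, and the real framework’s locator choices may differ:

```typescript
import { type Page, type Locator, expect } from '@playwright/test';

// Hypothetical page object: role- and label-based locators tend to survive
// markup refactors better than brittle CSS or XPath selectors.
export class MattersPage {
  readonly newMatterButton: Locator;
  readonly welcomeDialog: Locator;

  constructor(private readonly page: Page) {
    this.newMatterButton = page.getByRole('button', { name: 'New Matter' });
    this.welcomeDialog = page.getByRole('dialog', { name: 'Welcome' });
  }

  async open() {
    await this.page.goto('/matters'); // assumes baseURL in playwright.config.ts
    // Conditional logic for an optional UI state: dismiss the dialog
    // only when it actually appears.
    if (await this.welcomeDialog.isVisible()) {
      await this.welcomeDialog.getByRole('button', { name: 'Dismiss' }).click();
    }
    // The web-first assertion doubles as an intelligent wait: it retries
    // until the button renders instead of sleeping a fixed interval.
    await expect(this.newMatterButton).toBeVisible();
  }
}
```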
- Step 4: Integrate Automation into Existing QA Processes
We didn’t build automation in isolation. The test scripts and framework were integrated into Centerbase’s CI/CD pipeline and QA workflows so the in-house team could maintain them, and we trained the QA team to run, debug, and expand the automation suite. This step was key for adoption, scalability, and long-term success. We also ensured test reports were readable and actionable by non-engineers; a configuration along the lines of the sketch below supports that.
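A CI-oriented Playwright configuration might look something like the following. The values are illustrative assumptions, not Centerbase’s actual settings:

```typescript
// playwright.config.ts — illustrative CI-friendly settings.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  // Retry only on CI to separate genuine regressions from transient flakes.
  retries: process.env.CI ? 2 : 0,
  // An HTML report is readable by non-engineers; JUnit XML feeds CI dashboards.
  reporter: [
    ['html', { open: 'never' }],
    ['junit', { outputFile: 'results/junit.xml' }],
  ],
  use: {
    baseURL: process.env.BASE_URL, // hypothetical environment variable
    // Capture a trace on first retry so failures are easy to triage.
    trace: 'on-first-retry',
  },
});
```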
- Step 5: Monitor, Scale, and Optimize Iteratively
Once 30% of the regression suite was automated, we closely monitored execution time, false positives, and stability over multiple sprints, fine-tuning locator strategies and execution paths based on those insights. We used this data to define a roadmap for scaling coverage beyond regression into smoke and integration tests. Continuous feedback loops helped the framework evolve into something Centerbase could rely on confidently. One lightweight way to support that kind of scaling is tag-based test selection, sketched below.
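For instance, tagging tests in their titles lets subsets such as smoke runs be selected with Playwright’s `--grep` flag. The tags and test below are purely illustrative:

```typescript
import { test, expect } from '@playwright/test';

// Tags embedded in the title can be selected from the CLI, e.g.:
//   npx playwright test --grep @smoke
test('dashboard loads key widgets @smoke @regression', async ({ page }) => {
  await page.goto('/dashboard'); // assumes baseURL in playwright.config.ts
  await expect(
    page.getByRole('heading', { name: 'Firm Performance' }),
  ).toBeVisible();
});
```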
Conclusion
You’ve read how ThinkSys helped Centerbase overcome slow, costly regression testing with a tailored automation approach that brought speed, accuracy, and long-term stability. If you're a fast-moving software company dealing with sluggish QA cycles, underperforming tools, or delayed releases, this story might sound familiar.
ThinkSys specializes in solving exactly these problems with scalable solutions built for your workflow. If you’re facing QA challenges of your own, let’s connect and talk about how to solve them.
