Database Architecture: Maximizing Performance and High Availability while Reducing Cost

Database architecture is a constantly discussed topic. Companies are analysing their options ever more closely to find solutions that maintain stability while increasing return on investment. One of the most popular options right now is migrating to a cloud architecture, which has proven effective at reducing costs. However, there are other ways to reduce cost, such as unifying and consolidating the on-premises estate; despite the current trend towards cloud solutions, a business can achieve the same benefits with this approach.

I have been working as a Database Specialist Consultant for my client, who had been paying annually for expensive infrastructure and licenses across all their environments.

Previously, the servers ran SQL Server versions 2005 through 2012 with no clustered failover contingency, so any failure meant the data would be unavailable. The old setup could never guarantee the integrity of the data.

During high-traffic periods, the previous architecture was at risk of failure, and its lack of security, integrity, stability and availability could cost the company money.

I have been working for a nationwide New Zealand company with more than fifty years of history, whose marketing department works hard to maintain the company's reputation; any server failure could damage the public's view of the business.

I raised awareness within the company of the need to improve the database architecture across its environments.

The upgrade would improve data integrity, security and availability, and its new features would make development much more productive. It would also enhance the data warehouse with the new reporting servers, and the new engine promised T-SQL transactions up to four times faster than the previous version.



To move the project forward, I developed scenarios explaining how the new environment would work, applying SQL Server clustering to resolve the availability gaps.
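To make those scenarios concrete, here is a minimal sketch of the kind of clustered setup we proposed; the server, database and endpoint names are hypothetical placeholders rather than the client's real configuration:

    -- Minimal sketch: an Always On availability group with automatic failover.
    -- SQLNODE1, SQLNODE2, SalesDB and the endpoint URLs are hypothetical names.
    CREATE AVAILABILITY GROUP [AG_Main]
    FOR DATABASE [SalesDB]
    REPLICA ON
        N'SQLNODE1' WITH (
            ENDPOINT_URL      = N'TCP://SQLNODE1.corp.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC
        ),
        N'SQLNODE2' WITH (
            ENDPOINT_URL      = N'TCP://SQLNODE2.corp.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC
        );

With synchronous commit and automatic failover, the loss of one node no longer takes the data offline, which was exactly the gap in the old architecture.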

Changing the paradigm would take time. My CIO (Chief Information Officer) had previously been through a bad experience in which unavailability caused financial loss, so this enhancement project had to make the environment demonstrably better and more robust.

Clustered high availability involves a high investment in software and infrastructure. Fortunately, the company had already invested in the software and hardware this kind of solution requires, e.g. SQL Server Enterprise licenses and a SAN (storage area network); despite owning these, they did not know how to use them effectively.

The budget was another factor affecting project approval: five SQL Servers running all the databases would significantly increase the cost of per-core licenses.

My CIO had a target of reducing the budget. Keeping this in mind, and after running several performance tests, we found a suitable architecture that used fewer cores, which reduced the number of licenses while maintaining satisfactory performance.
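As a rough illustration, the core counts that drive per-core licensing can be read straight from SQL Server's standard DMVs; this was the kind of check behind our sizing:

    -- Report the CPU counts relevant to per-core licensing (standard DMV).
    SELECT cpu_count,                                     -- logical CPUs visible to SQL Server
           hyperthread_ratio,                             -- logical CPUs per physical socket
           cpu_count / hyperthread_ratio AS socket_count  -- physical sockets
    FROM sys.dm_os_sys_info;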

Microsoft had substantially improved the new SQL Server release, making it much faster and more reliable than previous versions; alongside our Databricks and Snowflake platforms, this allowed us to shrink the architecture, making it smaller and more efficient.

Confirmation came after stress-testing the SQL Server environments: the transaction counter metrics showed the reduced design remained positive and stable under load.
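The transaction counters SQL Server exposes are cumulative since startup, so a rate is computed from two samples. A minimal sketch of the measurement (the 10-second window is an arbitrary choice):

    -- Batch Requests/sec is cumulative, so sample twice and divide the
    -- delta by the interval to obtain the actual per-second rate.
    DECLARE @s1 BIGINT, @s2 BIGINT;

    SELECT @s1 = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Batch Requests/sec';

    WAITFOR DELAY '00:00:10';   -- arbitrary 10-second sampling window

    SELECT @s2 = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Batch Requests/sec';

    SELECT (@s2 - @s1) / 10 AS batch_requests_per_sec;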


Analysing all the current SQL Server environments further, I came across a problematic architecture built on multiple instances: four SQL Server instances running on a single server, competing for the same resources. As a consultant, I needed to advise the stakeholders of the potential consequences and the possible solutions.

Once we had decided to stop using multiple instances, all the applications were reallocated so that each server hosted a single SQL Server instance.
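Each relocation was essentially a backup and restore onto the consolidated instance. A sketch with hypothetical database names, share and file paths (the logical names in MOVE must match those recorded in the backup):

    -- On the retiring instance: take a full backup of the application database.
    BACKUP DATABASE [AppDB]
    TO DISK = N'\\san01\backups\AppDB.bak'
    WITH COMPRESSION, CHECKSUM;

    -- On the consolidated target instance: restore, relocating the files.
    RESTORE DATABASE [AppDB]
    FROM DISK = N'\\san01\backups\AppDB.bak'
    WITH MOVE N'AppDB'     TO N'E:\SQLData\AppDB.mdf',
         MOVE N'AppDB_log' TO N'F:\SQLLog\AppDB_log.ldf',
         RECOVERY;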

Diagrams were built for all the environments, and overload simulations were run to validate functional performance and load balancing, covering both the data warehouse and the operational applications.

The clustered high-availability design covered the major, relevant applications in two groups working active-active, so both nodes carried live load; this saved resources and reduced cost, as sketched below.
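Reusing the hypothetical names from the earlier sketch: with a second group, say AG_Warehouse, created the same way as AG_Main, a manual failover issued from SQLNODE2 makes that node its primary while SQLNODE1 remains primary for AG_Main, which is the active-active split:

    -- Run while connected to SQLNODE2 (the current secondary for this group).
    -- Afterwards each node is primary for one group: active-active.
    ALTER AVAILABILITY GROUP [AG_Warehouse] FAILOVER;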

For better integration, the whole IT department and all stakeholders were involved in the project's discussions and decisions.

The project was developed following established project management approaches and practices. We successfully achieved the goals using tools such as Trello, Confluence, HammerDB and Power BI (for performance data analysis).

All the steps on the project roadmap were defined, including the timeframe for each task and who would execute it. Each environment had its own roadmap for execution and follow-up.

The project took around six months to complete across all the environments, and we successfully migrated to production overnight.

We initially suspected that the new, reduced architecture would not be able to handle the heavy transaction volumes of all the production environments. However, after months of working on the new solution and analysing its performance, we proved it was stable and safe.

Since the first day of operating the SQL Server, Databricks and Snowflake environments, I have been collecting all the performance indicators for deep data analysis and monitoring.
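As one example of that collection, wait statistics can be snapshotted on a schedule into a history table and trended over time. The table dbo.PerfSnapshot below is a hypothetical name, not part of the actual solution:

    -- Hypothetical history table for trending wait statistics over time.
    CREATE TABLE dbo.PerfSnapshot (
        sampled_at          DATETIME2    NOT NULL,
        wait_type           NVARCHAR(60) NOT NULL,
        waiting_tasks_count BIGINT       NOT NULL,
        wait_time_ms        BIGINT       NOT NULL
    );

    -- Run on a schedule (e.g. via a SQL Agent job) to capture each snapshot.
    INSERT INTO dbo.PerfSnapshot (sampled_at, wait_type, waiting_tasks_count, wait_time_ms)
    SELECT SYSDATETIME(), wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_time_ms > 0;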

We have been using PRTG for historical data analysis and Power BI for real-time monitoring; tools like these make monitoring and management much easier.

I have set up alerts, defined thresholds and built reports for ongoing evaluation, and so far the performance has been great!
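For instance, a SQL Agent performance-condition alert of the kind I configured; the alert name, threshold and operator here are illustrative only:

    -- Illustrative alert: fires when the batch request rate crosses a threshold.
    EXEC msdb.dbo.sp_add_alert
         @name = N'High batch request rate',
         @performance_condition = N'SQLServer:SQL Statistics|Batch Requests/sec||>|5000',
         @enabled = 1;

    -- Notify an existing operator (hypothetical name) by e-mail when it fires.
    EXEC msdb.dbo.sp_add_notification
         @alert_name = N'High batch request rate',
         @operator_name = N'DBA Team',
         @notification_method = 1;   -- 1 = e-mail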

My client is very happy with the new solution, which reduced cost and increased availability; the project was definitely a success.

We are now preparing for new releases and future applications; technology never stops improving.