Multicloud architecture decomposition simplified


Architectures are like opinions; everyone has one that's based on their own biases. Sometimes it's a dedication to using only open source solutions, a specific brand of public cloud, relational databases, you name it. These biases are often the driving factors that determine which solution you employ and how good or bad those choices turn out to be.

The issue is that when you choose components or technology based on a bias, you often don't consider alternatives that are better able to meet the core requirements of the business. This leads to an architecture that may approach, but never reach, 100% optimization.

Optimization means that costs are kept to a minimum and efficiency is kept to a maximum. You can give 10 cloud architects the same problem to solve and get back 10 very different solutions, with prices that vary by millions of dollars a year.

The problem is that all 10 solutions will work, sort of. You can mask an underoptimized architecture by throwing money at it in the form of layers of technology that remediate performance, resiliency, security, and so on. All of these layers can add as much as 10 times the cost compared to a multicloud architecture that was optimized in the first place.

How do you build an optimized multicloud architecture? Multicloud architecture decomposition is the best approach. It's really an old trick applied to a new problem: Decompose every proposed solution into its functional primitives and evaluate each one on its own merits to see whether the core component is optimal.

For example, don't just look at a proposed database service; look at the components of that service, such as data governance, data security, data recovery, I/O, caching, and rollback. Make sure that not only is the database a good choice, but that its subsystems are as well. Sometimes third-party products may be better.
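To make the idea concrete, here's a minimal sketch in Python of what a decomposition review might look like. Everything in it is hypothetical: the Primitive structure, the fit scores, the costs, and the 0.75 threshold are illustrative stand-ins for the requirements and benchmarks you would gather yourself.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    """One functional primitive of a proposed service (hypothetical numbers)."""
    name: str
    fit: float           # 0-1: how well this primitive meets the core requirement
    monthly_cost: float  # estimated monthly cost in dollars

# Hypothetical decomposition of one proposed cloud database service.
proposed_db = [
    Primitive("data governance", fit=0.90, monthly_cost=1200),
    Primitive("data security",   fit=0.95, monthly_cost=2500),
    Primitive("data recovery",   fit=0.60, monthly_cost=1800),
    Primitive("I/O",             fit=0.85, monthly_cost=4000),
    Primitive("caching",         fit=0.50, monthly_cost=900),
    Primitive("rollback",        fit=0.80, monthly_cost=600),
]

FIT_THRESHOLD = 0.75  # below this, shop for alternatives (including third party)

def review(primitives, threshold=FIT_THRESHOLD):
    """Give each primitive its own verdict instead of judging the service whole."""
    for p in primitives:
        verdict = "OK" if p.fit >= threshold else "evaluate alternatives"
        print(f"{p.name:<16} fit={p.fit:.2f}  ${p.monthly_cost:>7,.0f}/mo  {verdict}")
    total = sum(p.monthly_cost for p in primitives)
    print(f"{'total':<16} {'':>9}  ${total:>7,.0f}/mo")

review(proposed_db)
```

The mechanics matter less than the discipline: each subsystem gets its own verdict, so a weak caching or recovery layer can be swapped out before it gets masked by expensive compensating technology.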
