The Elastic infrastructure

Elastic infrastructure vs. composable infrastructure at the hardware level: here is why A3Cube invented the elastic infrastructure.

The main difference between an elastic infrastructure and a composable one is how bandwidth is managed across all the devices.

Composable infrastructure relies on a PCIe fabric at the hardware level: each node uses a single PCIe adapter, with a switch that connects all the nodes together.

The significant advantage of this approach is that you can reduce data center costs by sharing hardware devices; the drawback is that those same devices are shared across multiple servers.

Sharing devices among multiple servers means sharing the bandwidth and the performance of a single device among various systems. Composable infrastructure is suitable for traditional data centers, which offer services where performance is not the first requirement: virtual desktops, e-mail, back-office, and similar applications, including storing information, photos, and documents. It is not a good fit at all for Data Analytics, ML, and AI applications.

A3Cube has been selling composable infrastructure based on a PCIe fabric (RONNIEEE Express) since 2014, pioneering every aspect of this technology.

A3Cube developed a great deal of software to extract the maximum performance from composable infrastructure, from storage to GPUs.

After thousands of tests, A3Cube discovered that the best performance and the best efficiency came not from the composability of the infrastructure, but from its elasticity.

This conclusion is the result of seven years of composable infrastructure testing and benchmarking.

In a nutshell, the concept is simple. An example clarifies it: imagine a rack of servers, each with its own network card. Now imagine deploying a composable infrastructure: you configure it so that a single network card in one server is shared among all the other servers. The cost advantages are evident, but there is also a drawback that, in many modern use cases, outweighs the benefits.

The benefit is significant because you use only one device to connect all the servers in a rack to the data center network.

With a single top-of-rack network switch, you can serve n times the number of servers you could serve without sharing the network controller. The arithmetic is simple and evident to anyone.
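The trade-off above can be made concrete with back-of-the-envelope arithmetic. The NIC speed and rack size below are hypothetical illustration values, not figures from A3Cube's products:

```python
# Hypothetical numbers: a 100 Gb/s NIC shared across a 16-server rack.
NIC_BANDWIDTH_GBPS = 100   # one network controller
SERVERS_PER_RACK = 16      # servers attached via the PCIe fabric

# Dedicated design: every server has its own NIC.
dedicated_per_server = NIC_BANDWIDTH_GBPS                  # full bandwidth each
rack_uplinks_dedicated = SERVERS_PER_RACK                  # one switch port per server

# Composable design: one NIC shared across the whole rack.
shared_per_server = NIC_BANDWIDTH_GBPS / SERVERS_PER_RACK  # bandwidth divided n ways
rack_uplinks_shared = 1                                    # a single switch port

print(f"dedicated: {dedicated_per_server} Gb/s per server, "
      f"{rack_uplinks_dedicated} uplinks")
print(f"shared:    {shared_per_server} Gb/s per server, "
      f"{rack_uplinks_shared} uplink")
```

The switch-port savings (16 ports down to 1) and the bandwidth cost (100 Gb/s per server down to 6.25 Gb/s under full load) are two sides of the same division.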

This is clearly a huge benefit, but on closer inspection it is also a dramatic drawback for the data center architecture.

You cannot scale to the level of performance required by Machine Learning, Data Analytics, and Deep Learning applications. To be clear: the performance simply is not there, and that is not a good thing.

If you look at the CPU and accelerator markets today, you will see that all the vendors are trying to provide as much bandwidth as possible to their devices. AMD introduced 128 PCIe Gen 4 lanes, and NVIDIA moved to PCIe Gen 4 and uses NVLink for GPU-to-GPU communication. System vendors understand that modern applications require moving a real tsunami of data across all the devices inside a server, and across all the servers inside a data center.
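To see why those 128 lanes matter, here is a sketch of the bandwidth they represent, using the PCIe 4.0 per-lane rate (16 GT/s with 128b/130b encoding); the lane counts are the public spec values, and the results are theoretical peaks, not measured throughput:

```python
# PCIe Gen 4: 16 GT/s per lane, 128b/130b line encoding.
GT_PER_S = 16.0
ENCODING = 128 / 130

gb_per_lane = GT_PER_S * ENCODING / 8    # GB/s per lane, per direction
x16_link = gb_per_lane * 16              # one x16 slot (e.g. a GPU or NIC)
full_128_lanes = gb_per_lane * 128       # an entire 128-lane CPU

print(f"per lane:  {gb_per_lane:.2f} GB/s")
print(f"x16 link:  {x16_link:.1f} GB/s")
print(f"128 lanes: {full_128_lanes:.1f} GB/s")
```

Roughly 252 GB/s of aggregate I/O per socket is the scale of bandwidth that sharing a single adapter across a rack throws away.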

New processors are designed with the maximum number of I/O interfaces (PCIe), not to share them: the bandwidth modern applications demand requires multiple network interfaces running in parallel, rather than sharing the single one inside a server among many other servers. Composable infrastructure is an excellent technical idea, but it is not useful in the real world; or rather, it is helpful only for legacy types of applications and infrastructure.

That is not a flaw in itself; legacy applications exist. But you need to know that you cannot use this approach for Data Analytics, Machine Learning, and Deep Learning applications.

It is not a good idea to create a new infrastructure that shares bandwidth; that is not what modern applications need today. This approach is suitable only for legacy applications (maybe!).

Even if your goal is to save money, this is not the right approach. At A3Cube, we have been using different types of composable infrastructure since 2014, and we know very well that this kind of approach is not suitable for many modern applications in Data Analytics, Machine Learning, and Artificial Intelligence.

For all these reasons, A3Cube changed direction from composable infrastructure to the new concept of elastic infrastructure. The elastic infrastructure relies on using many identical systems to achieve the maximum performance and efficiency for the applications. Using identical system blocks (e.g., servers with optimized configurations or blocks of pooled GPUs) as building blocks, a computing infrastructure can be elastically organized to match the application requirements.
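The building-block idea can be sketched in a few lines. This is a deliberately simplified illustration of elastically regrouping identical blocks per workload; the function name, block counts, and workload names are hypothetical and do not represent A3Cube's actual software:

```python
# Toy sketch: hand out identical building blocks (e.g. identically
# configured servers) to each application, then regroup them later
# for a different workload mix. Purely illustrative.

def allocate(blocks_free, demands):
    """Greedily grant each application up to its requested block count."""
    plan = {}
    for app, needed in demands.items():
        granted = min(needed, blocks_free)
        plan[app] = granted
        blocks_free -= granted
    return plan, blocks_free

# Same 16 blocks, organized two different ways for two workload mixes.
morning, free_am = allocate(16, {"analytics": 10, "training": 4})
evening, free_pm = allocate(16, {"training": 12, "analytics": 4})

print(morning, "free:", free_am)
print(evening, "free:", free_pm)
```

Because every block is identical, nothing needs to be physically rewired between the two configurations; only the grouping changes, which is the essence of the elastic approach described above.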

Organizing a system to give maximum performance for a specific application, and then reorganizing it for another while still delivering the best performance, is the real key to modern data center infrastructure.

Maximum performance on every component permits global efficiency, reducing the overall resources needed to reach a given performance level and obtaining the same results as composable infrastructure without the related drawbacks.

Here is why the future is elastic, not composable!
