Product Advantages
Dedicated Load Balancers
- Robust performance
Each dedicated load balancer has exclusive use of isolated underlying resources and can provide guaranteed performance. A single dedicated load balancer deployed in one AZ can establish up to 20 million concurrent connections, and a load balancer deployed across two AZs can establish up to 40 million concurrent connections, meeting your requirements for handling a massive number of requests.
- High availability
Underlying resources are deployed in clusters so that load balancers can route traffic without interruption. If your servers in one AZ are unhealthy, dedicated load balancers automatically route traffic to healthy servers in other AZs. Dedicated load balancers provide a comprehensive health check mechanism to ensure that incoming traffic is routed only to healthy backend servers, improving the availability of your applications. A simplified sketch of this health check behavior follows this list.
- Ultra-high security
Dedicated load balancers allow you to select security policies that fit your security requirements.
- Multiple protocols
Dedicated load balancers support TCP, UDP, HTTP, and HTTPS, so they can route requests from different types of applications.
- Hybrid load balancing
Dedicated load balancers can route requests to both cloud servers and on-premises servers, allowing you to leverage the public cloud to handle burst traffic.
- Ease-of-use
Dedicated load balancers provide a diverse set of algorithms that allow you to configure different traffic routing policies to meet your requirements while keeping deployments simple.
- High reliability
Dedicated load balancers can be deployed across AZs to distribute traffic more evenly.
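To make the health check behavior described under "High availability" easier to picture, here is a minimal Python sketch of the general idea: backends are probed periodically and requests are forwarded only to servers that pass the check. It is a conceptual illustration with made-up addresses, AZ names, and timeout values, not the actual ELB implementation or API.

```python
import random
import socket

# Hypothetical backend servers in two AZs (addresses and AZ names are placeholders).
BACKENDS = [
    {"address": ("10.0.1.10", 80), "az": "az-1", "healthy": True},
    {"address": ("10.0.2.10", 80), "az": "az-2", "healthy": True},
]

CHECK_TIMEOUT = 2  # seconds before a probe is considered failed (illustrative value)


def probe(backend):
    """TCP health check: the backend is healthy if its port accepts a connection."""
    try:
        with socket.create_connection(backend["address"], timeout=CHECK_TIMEOUT):
            return True
    except OSError:
        return False


def run_health_checks():
    """Mark each backend healthy or unhealthy based on the latest probe."""
    for backend in BACKENDS:
        backend["healthy"] = probe(backend)


def pick_backend():
    """Choose only among healthy backends; if one AZ fails, the other AZ still receives traffic."""
    healthy = [b for b in BACKENDS if b["healthy"]]
    return random.choice(healthy) if healthy else None


if __name__ == "__main__":
    run_health_checks()  # a real load balancer repeats this at a configured interval
    target = pick_backend()
    if target:
        print(f"forwarding request to {target['address']} in {target['az']}")
    else:
        print("no healthy backends; requests are not forwarded")
```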
Shared Load Balancers
- Robust performance
A shared load balancer can establish up to 100 million concurrent connections and up to 1 million new connections per second, and can handle up to 1 million requests per second, meeting your requirements for handling huge numbers of concurrent requests.
- High availability
Underlying resources are deployed in clusters so that load balancers can route traffic without interruption. If your servers in one AZ are unhealthy, shared load balancers automatically route traffic to healthy servers in other AZs. Shared load balancers provide a comprehensive health check mechanism to ensure that incoming traffic is routed only to healthy backend servers, improving the availability of your applications.
- Multiple protocols
Shared load balancers support TCP, UDP, HTTP, and HTTPS.
- Ease-of-use
Shared load balancers provide a diverse set of algorithms that allow you to configure different traffic routing policies to meet your requirements while keeping deployments simple. A simplified sketch of one such algorithm, weighted round robin, follows this list.
- High reliability
Shared load balancers can be deployed across AZs and can distribute traffic more evenly.
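The "diverse set of algorithms" mentioned above includes policies such as weighted round robin. The sketch below illustrates the general idea of that algorithm with made-up server names and weights; it is a conceptual example of how requests can be spread in proportion to server weights, not the ELB implementation.

```python
from itertools import cycle

# Hypothetical backend servers and weights (names and weights are placeholders).
SERVERS = {"server-a": 3, "server-b": 2, "server-c": 1}


def weighted_round_robin(servers):
    """Yield server names so that each server receives requests in proportion to its weight."""
    expanded = [name for name, weight in servers.items() for _ in range(weight)]
    return cycle(expanded)


if __name__ == "__main__":
    scheduler = weighted_round_robin(SERVERS)
    # Dispatching 12 requests sends 6 to server-a, 4 to server-b, and 2 to server-c.
    for request_id in range(12):
        print(f"request {request_id} -> {next(scheduler)}")
```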