
Cloud Call Center Architecture & Practice on AWS

Updated: Mar 17, 2020

How can call centers take full advantage of cloud computing?

Which architecture is used to meet the challenges of rapid business growth?

What is the most important step in building a native cloud architecture?

Ever-expanding business scale and a rapidly growing customer base have pushed corporate call centers to support large numbers of agents with high availability. The need for unified management of self-built and outsourced workplaces, along with issues such as outbound calling and quality-inspection efficiency, has put further pressure on call centers. The traditional model is unsustainable.

The rise of new technologies such as cloud computing and artificial intelligence has pushed the call center industry into a new era, and traditional call centers built around integrated hardware now face an opportunity for change. Multi-tenant technology has sharply reduced one-time construction costs and given call centers greater agent elasticity, while moving soft switching and CTI to the cloud has solved scalability and related problems. Call center cloud services are now gradually gaining recognition and trust from enterprise customers.



01 What is a cloud-native architecture?

Some design concepts of traditional call center software run contrary to the value of cloud computing. For example, if the software continues to use the CPU serial number for license authorization, the call center becomes unavailable as soon as the underlying hardware of the cloud virtual environment changes.

A cloud-native architecture breaks with these traditional design concepts and aligns with the value and direction of cloud computing. It treats infrastructure, platform, and software as cloud resources and builds on mature, open, and open-source cloud technologies, making a call center based on it a system that can grow: the call center improves together with the performance, functionality, and availability of the underlying cloud service components.


Figure 1 Call Center Evolution


Based on the cloud-native architecture, call centers become stronger and more efficient than ever before. They can meet the demand for high platform availability while offering advantages in large capacity, elasticity, scalability, and rapid deployment.

Every advance in the call center has been closely tied to technological progress. Cloud computing is the core technology behind call center cloud services: it lets the call center conveniently use shared computing resources, storage resources, and application functions anytime and anywhere, and it effectively solves problems of platform capacity, high availability, and scalability, while giving the call center clear advantages in deployment speed, unified management, and cost control. AsiaInfo CTI-Cloud, built on a cloud-native architecture, currently supports 20,000 concurrent agent logins on a single platform with 99.99% platform availability.


02 Establishing and exploring the cloud-native architecture.

AsiaInfo has long been committed to providing enterprises with high-quality call center services built on new models and new technologies. As an Advanced Technology Partner of AWS China (AWS being the world's largest cloud service provider), AsiaInfo makes full use of this cooperation to focus on the evolution and application of call center cloud services on the cloud platform.

Focus on user needs and stay one step ahead. "We used to discuss in the team how to use S3 as an object database, and soon S3 Select and Glacier Select went live; around the same time AWS Aurora Multi-Master happened to be released; and when we tried to use Kaldi for speech recognition, Amazon Transcribe provided a hosted service," said AsiaInfo's Chief Architect. "New AWS products are always released one step ahead of users, so users can easily evolve their products at their own pace."

Serverless architecture innovation lets developers focus on what matters. Lambda, launched by AWS in 2014, was the world's first commercial serverless product and has been highly regarded since its launch; the large number of practical case studies shared at re:Invent 2017 marked the maturity of the technology. Beyond the serverless core of Lambda and API Gateway, AWS also offers Aurora, bare metal, GPU, FPGA, IoT, and more. Developers no longer need to worry about anything unrelated to their code: they just focus on business logic and connect the services.
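As a rough illustration (not code from the article), a minimal Python Lambda handler behind an API Gateway proxy integration can look like the sketch below; the function name and response shape are assumptions.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    'event' carries the HTTP request; the dict returned here becomes the
    HTTP response. No server, process manager, or scaling logic is needed.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```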

AsiaInfo CTI-Cloud's cloud-native architecture builds on AWS's mature cloud service components for rapid iteration and evolution. Using a microservice architecture, the call center is split into 26 sub-modules, and each module uses clustering and active-active techniques to achieve large capacity, high elasticity, and high availability.



Figure 2 AsiaInfo CTI-Cloud's native cloud architecture


Among the AWS services AsiaInfo currently uses (see Figure 3), those in the yellow dotted box were adopted only in the past year. For example, Lambda lets developers run code without provisioning or managing servers and can process streaming data in real time; there is no charge when the code is not running, which helps reduce costs. Amazon DynamoDB is a fast and flexible NoSQL database service with strong advantages in management, performance, scalability, and reliability, suited to applications that need low-latency data storage and reads at any scale.
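As an illustrative sketch only, the snippet below shows how little code a low-latency DynamoDB write and read takes with boto3; the table name, key, and attributes are assumed, not taken from the article.

```python
import boto3

# Assumed table name and key schema; adjust to your own table.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("call-records")

# Write one call record.
table.put_item(Item={
    "call_id": "20200317-0001",   # partition key (assumed)
    "agent_id": "agent-42",
    "duration_seconds": 185,
})

# Read it back with consistently low latency, regardless of table size.
response = table.get_item(Key={"call_id": "20200317-0001"})
print(response.get("Item"))
```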

In recent years, artificial intelligence technologies such as machine learning, speech recognition, and NLP have become widespread in the call center industry. AI-related services such as Lex, Polly, and Machine Learning (see Figure 3) are also included in our subsequent adoption plans. These technologies will bring a better experience and higher efficiency to the call center. Polly, for example, converts text into lifelike speech and can replace the cold, rigid robotic voice of traditional TTS.
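A minimal Polly sketch might look like the following; the voice, prompt text, and output file are assumptions for illustration.

```python
import boto3

# Convert a text prompt to lifelike speech with Amazon Polly.
polly = boto3.client("polly")

result = polly.synthesize_speech(
    Text="Hello, thank you for calling. How can I help you today?",
    OutputFormat="mp3",
    VoiceId="Joanna",   # assumed voice for illustration
)

# AudioStream is a streaming body; save it as an mp3 prompt that an
# IVR flow could play instead of a rigid robotic TTS voice.
with open("greeting.mp3", "wb") as f:
    f.write(result["AudioStream"].read())
```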


03 Evolution and value of recording processing architecture.

What is the most important step in the cloud-native transformation, and what value does it bring? The evolution of the recording processing architecture tells the story. Its most important step is the adoption of Auto Scaling and Lambda, which gives recording processing high availability, large capacity, and high elasticity, ensures that recordings are no longer lost, and greatly reduces both recording generation time and cost.


In the first-generation recording processing architecture, recordings are generated by the sip-media module of the CTI-Cloud platform, passed through an Elastic Load Balancer (ELB) to the media-zip compression module for processing, and finally all wav files and the processed mp3 files are uploaded to S3.

The biggest disadvantage of this architecture is its inflexibility. The number of media-zip instances must be preset: during business peaks the recording processing delay grows, while presetting too many instances causes a great deal of waste.


In the second-generation recording architecture, Auto Scaling places the sip-media and media-zip modules in an elastic scaling group that grows and shrinks with business volume. This architecture still has two disadvantages. First, because every recorded wav file passes through the Elastic Load Balancer to the back-end media-zip instances, the traffic pressure on the load balancer is high. Second, the media-zip scaling group also terminates instances on scale-in; if an instance is destroyed before its wav files are processed, those recordings are lost.
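The article does not say which metric drives the scaling group, but a typical setup, sketched below with an assumed group name and a CPU-based target-tracking policy, could look like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Assumed group name; in the second-generation architecture this group
# would contain the media-zip (and sip-media) instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="media-zip-asg",
    PolicyName="keep-cpu-near-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Scale out when average CPU rises above ~60%, scale in below it.
        "TargetValue": 60.0,
    },
)
```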


In the latest-generation serverless recording processing architecture, the EC2 elastic scaling group is replaced by a Lambda function. sip-media uploads the wav file directly to the S3 bucket, eliminating the intermediate load balancer. An S3 event notification triggers the Lambda function, which retrieves the wav file, converts it to mp3, and uploads the result back to the S3 bucket.
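A rough sketch of such an S3-triggered Lambda function is shown below. The key prefixes and the use of an ffmpeg binary (assumed to be packaged, for example, as a Lambda layer) are illustrative assumptions rather than details from the article.

```python
import os
import subprocess
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by an S3 'ObjectCreated' event for an uploaded wav recording.

    Downloads the wav file, converts it to mp3, and uploads the result back
    to the bucket. An 'ffmpeg' binary is assumed to be available in the
    Lambda environment (for example via a Lambda layer).
    """
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # e.g. "wav/call-0001.wav"

        wav_path = "/tmp/" + os.path.basename(key)
        mp3_path = wav_path.rsplit(".", 1)[0] + ".mp3"

        # 1. Fetch the raw recording produced by sip-media.
        s3.download_file(bucket, key, wav_path)

        # 2. Compress wav -> mp3.
        subprocess.run(["ffmpeg", "-y", "-i", wav_path, mp3_path], check=True)

        # 3. Upload the compressed recording under an assumed "mp3/" prefix.
        mp3_key = "mp3/" + os.path.basename(mp3_path)
        s3.upload_file(mp3_path, bucket, mp3_key)
```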

This architecture has three advantages: it is more elastic, more reliable, and cheaper. The scaling response time drops from 5 minutes to about 1 second, and recording generation time falls to under 5 seconds; files are uploaded first and processed afterwards, with exceptions handled automatically, so recordings are not lost; and billing shifts from running instances to execution time measured in milliseconds.
