
The evolution of event processing for telecommunications operators

June 24, 2014

This has included both real-time event processing, for triggering actions that require an immediate response, and near real-time processing (e.g., 15-minute batches) for dashboards and downstream OSS/BSS systems that want timely information but where up-to-the-second, or millisecond, latency is not a factor. Both stand in contrast with traditional batch-based billing systems that support daily processing.

Today, the industry requires transactional event processing more than ever before. However, the use cases for these initiatives have evolved, as has the technology for these solutions. Where billing mediation could be satisfied through batch-based activities involving relational databases and file distribution, the new use cases for transactional event processing are tailored towards video analytics, broadband analytics, and predictive analytics for network operations.

New technologies are required to satisfy these evolving requirements. Structured files are no longer sufficient, as Big Data technologies have supplanted them with the ability to ingest and manage social media alongside network usage and customer information. Similarly, relational databases are ill-suited to the type of event processing now required. A multiple-database architecture has evolved, in which specialized databases are fit for purpose across the activities of ingesting, staging, and storing data for analytics and long-term retention.

The initial processing of complex, disparate data sets remains perhaps the greatest challenge. This is where OpenetDB excels: a high-performance in-memory database that provides the data storage functions needed while collecting, validating, and standardizing complex data forms. Other databases are better suited to subsequent staging, analytics, and long-term storage, but this initial step is the primary function driving Openet's new business: delivering a wide array of timely events for video analytics, broadband analytics, and predictive analytics for network operations. OpenetDB is a major component positioning Openet for the delivery of Big Data projects. It can ingest and process both structured and unstructured data, pre-processing enormous volumes to validate and standardize useful data sets while applying appropriate measures to account for personally identifiable information and data governance standards.
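The collect, validate, and standardize steps described above can be sketched as a simple pipeline. This is a hypothetical illustration only, not Openet's implementation; the event fields, validation rules, and use of hashing to pseudonymize the subscriber identifier are all assumptions:

```python
from datetime import datetime, timezone
import hashlib

# Hypothetical raw usage event as collected from a network element.
raw_event = {
    "msisdn": "+353851234567",   # personally identifiable information
    "bytes_up": "10240",
    "bytes_down": "204800",
    "ts": "2014-06-24T10:15:00Z",
}

def validate(event):
    """Reject events missing required fields or carrying malformed values."""
    required = {"msisdn", "bytes_up", "bytes_down", "ts"}
    if not required.issubset(event):
        return False
    try:
        int(event["bytes_up"])
        int(event["bytes_down"])
        datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ")
    except ValueError:
        return False
    return True

def standardize(event):
    """Normalize types and pseudonymize PII before downstream staging."""
    return {
        # A one-way hash stands in for the PII / data-governance measures
        # mentioned in the text; real deployments would follow policy.
        "subscriber_key": hashlib.sha256(event["msisdn"].encode()).hexdigest()[:16],
        "bytes_up": int(event["bytes_up"]),
        "bytes_down": int(event["bytes_down"]),
        "ts": datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ")
              .replace(tzinfo=timezone.utc),
    }

if validate(raw_event):
    clean = standardize(raw_event)
```

Only events that pass validation are standardized and handed on to staging; malformed records would typically be routed to an error store for reprocessing.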

Likewise, the industry is evolving dramatically towards a new architecture for policy and charging, Openet's cornerstone capabilities. These involve transactional requests and responses, with configurable business rules that enable a wide variety of industry-leading use cases.
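The request/response pattern with configurable business rules can be sketched as a small, ordered rule table: each rule pairs a condition on the request with a response. This is an illustrative sketch only, not Openet's rule engine; the plan names, thresholds, and actions are assumptions:

```python
# Ordered rule table: (condition, response); first match wins.
RULES = [
    (lambda req: req["plan"] == "prepaid" and req["balance_mb"] <= 0,
     {"action": "BLOCK", "reason": "quota exhausted"}),
    (lambda req: req["balance_mb"] < 100,
     {"action": "THROTTLE", "max_kbps": 256}),
    (lambda req: True,                    # default rule
     {"action": "ALLOW"}),
]

def decide(request):
    """Return the response of the first rule whose condition matches."""
    for condition, response in RULES:
        if condition(request):
            return response

decision = decide({"plan": "prepaid", "balance_mb": 50})
# decision["action"] == "THROTTLE"
```

Because the rules are data rather than code, an operator could reorder them or adjust thresholds through configuration, which is the essence of the configurability described above.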

As a software vendor delivering solutions on COTS hardware for these functions, Openet has embraced the transformational changes involved with Software Defined Networking (SDN) and Network Functions Virtualization (NFV). OpenetDB is ideally suited to these deployments: unlike legacy databases, it can scale elastically and is not tied to physical hardware concepts such as the number of CPUs or servers.

In addition, a critical challenge in enabling SDN/NFV is ensuring full availability of the stateful data used in policy and charging. Both functions involve interactions with networks that cannot tolerate downtime, as these solutions sit in the control path for services.

Traditional methods of providing high availability for software, including Oracle RAC and data replication, are insufficient and ill-suited to virtualized environments in general, and to SDN and NFV in particular. A new technology is therefore needed to provide reliable stateful data for these transactions, both within and across data centers. OpenetDB solves this problem with a critical feature called k-safety, which provides synchronous partition replication within the database cluster. Through k-safety, OpenetDB creates and maintains additional copies of each partition, carefully distributed among the nodes in the cluster so that data remains fully available even when some nodes fail. This approach enables high-performance, low-latency transactions in virtualized, hypervisor-based environments that can scale and be centrally managed and orchestrated as a software defined network demands, without the need for legacy technologies such as database replication.
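The idea behind k-safety can be sketched as follows: each partition is placed on k + 1 distinct nodes, so the cluster tolerates up to k simultaneous node failures. The round-robin placement below is a minimal illustration, not OpenetDB's actual placement algorithm:

```python
from itertools import cycle

def place_partitions(num_partitions, nodes, k):
    """Map each partition to k + 1 distinct nodes (k-safety sketch)."""
    assert len(nodes) > k, "need more nodes than failures tolerated"
    placement = {}
    ring = cycle(nodes)
    for p in range(num_partitions):
        copies = []
        while len(copies) < k + 1:
            node = next(ring)
            if node not in copies:   # replicas must land on distinct nodes
                copies.append(node)
        placement[p] = copies
    return placement

def available(placement, failed_nodes):
    """The data set survives if every partition has a replica on a live node."""
    return all(any(n not in failed_nodes for n in copies)
               for copies in placement.values())

placement = place_partitions(num_partitions=8,
                             nodes=["n1", "n2", "n3", "n4"], k=1)
# With k = 1, losing any single node leaves every partition available.
assert available(placement, failed_nodes={"n2"})
```

Note that k-safety bounds the failures the cluster absorbs: with k = 1, two simultaneous node failures can still take out both replicas of some partition, which is why the choice of k is a trade-off between hardware cost and fault tolerance.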

In the fifteen years of Openet's existence, the industry has never undergone such transformation in the areas of Openet's core competencies, namely passive event processing and dynamic event processing. While this requires change throughout operators' infrastructure, it nonetheless creates a massive opportunity. Orders of magnitude more data will be processed, in new ways, leveraging Big Data techniques and requiring an efficient transactional solution to harness data from myriad sources into analytics platforms. At the same time, networks are radically transforming into software-oriented platforms, harnessing virtualized, rapid request/response solutions that can be easily deployed, configured, and managed, with redundancy for full availability in this new architecture. OpenetDB underpins an essential leap forward for Openet in delivering this transformation. As the preeminent provider of solutions in both of these areas, Openet is helping to pave the way on new business initiatives spearheaded by the industry's need for change.