In the beginning of June, I did a talk at Bynet Expo titled “Powering the Agile Operator.” Part of the talk was about how NFV could help power the agile operator. As most readers know, NFV promises lower CAPEX, lower OPEX, and faster rollout of services. In other words, agility. As such, NFV is a key component of the “agile” strategy.
But why and how? To me, NFV is about a service provider obtaining best-of-breed VNFs (Virtual Network Functions – basically software-based functionality that the network requires to do its job) for what it is trying to do, and putting them together in its network. It borrows concepts from the IT domain, where you get the best functionality from different suppliers and have it all work together. This best-of-breed software approach is new for service providers and will be a game changer in the ability to roll out services. Lower CAPEX and lower OPEX are certainly byproducts of that.
As such, the service provider is not tied to a single supplier, which means it is not tied to that supplier's schedule or subject to its fees for special upgrades. The service provider will have software-based choices and can assemble the network it wants from the VNFs. Of course, there is work to do regarding interoperability, but that will come.
The fact that software can work on existing hardware, software VNFs can be spun up and down as network needs dictate, and that this software can be developed independently from the hardware is a huge step forward in network agility.
Publish Date: August 2, 2016 5:00 AM
Network Functions Virtualization is more than just moving network infrastructure functionality to a virtualized environment. It’s about orchestrating, delivering, and managing end-to-end services by chaining together software-based network functions, and doing this in an automated and programmatic fashion.
The NFV list of benefits definitely includes lower CAPEX and OPEX, but more than that, NFV allows operators to be more flexible. It does this by providing a framework for operators to build networks and roll out services at software speed. It also allows them to automate many tasks that, in a traditional network architecture with elements running on proprietary hardware, are cost-prohibitive or simply not feasible.
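To make "automation at software speed" concrete, here is a minimal sketch of the kind of scale-out/scale-in logic an NFV framework enables. The `VnfPool` class and its thresholds are made up for illustration; they are not any real orchestrator's API.

```python
# Hypothetical sketch: spinning VNF instances up and down based on load.
# "VnfPool" and the 80%/20% thresholds are illustrative assumptions.

class VnfPool:
    def __init__(self, name, min_instances=1, max_instances=8):
        self.name = name
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.instances = min_instances

    def autoscale(self, load_per_instance_pct):
        # Scale out under heavy load, scale back in when mostly idle.
        if load_per_instance_pct > 80 and self.instances < self.max_instances:
            self.instances += 1
        elif load_per_instance_pct < 20 and self.instances > self.min_instances:
            self.instances -= 1
        return self.instances

firewall = VnfPool("vFirewall")
print(firewall.autoscale(95))  # heavy load: scale out to 2
print(firewall.autoscale(10))  # idle: scale back to 1
```

In a traditional architecture, the equivalent of that `autoscale` step is a hardware purchase order; in an NFV environment it is a programmatic call.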
This infographic highlights some of the areas within Cloud/NFV environments where automation can play a role. For example:
So take a look at the ways NFV is defining standards for automating the network infrastructure within data center cloud environments.
Download the PDF Version
Publish Date: August 1, 2016 5:00 AM
With legacy circuit-switched equipment rapidly reaching retirement age, telecom service providers around the globe are focused on evolving to all-IP networks utilizing SIP and Diameter signaling, and additionally, on migrating to virtualized and cloud-based platforms. With compelling advantages that range from opex and capex cost savings to multimedia applications and service portability, all-IP Next Gen and IMS networks certainly do represent a big leap forward from the installed base of circuit-switched networks that rely on Signaling System 7 (SS7) protocols and Signaling Transfer Point (STP) switches for end-to-end connectivity.
However, despite the many compelling advantages of all-IP broadband networks, it would be premature to declare SS7 and STPs "dead". In fact, SS7 signaling is still very much alive and kicking. So the question is: why does SS7 live on and remain relevant today? Well, one reason for SS7's continued vitality is that it serves as a central nervous system for the PSTN and PLMN, and can therefore be relied upon to guarantee global connectivity for both voice and SMS services as the world gradually transitions to all-IP networks over the coming decade. A second reason for SS7's ongoing relevance can be attributed to the success of mobile Short Messaging Service (SMS) and the fact that it continues to live on despite fierce competition from OTT instant messaging alternatives. In fact, SMS text messaging usage has actually been aided in recent years by its integration into compelling new applications such as Visual IVR, mobile marketing, and mobile payments. Clearly, with demand for these sorts of applications on the rise, network operators will want to ensure that their SS7 and STP infrastructures remain fully supported well into the foreseeable future.
While the SS7 protocol itself is unquestionably robust and readily able to address most of today's networking challenges, in too many cases the same cannot be said about the associated STP switching infrastructure, much of which tends to be extremely dated and deployed on proprietary TDM hardware platforms. Unavoidably, end-of-life STP switches supporting E1 and T1 spans will likely require appliance-based replacement solutions; however, there are also STPs dedicated to supporting SS7 over IP-based facilities (SS7 over IP is called SIGTRAN), and these particular STPs could be candidates for virtualized STP solutions that offer opex and capex savings along with improved service agility.
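Whether the STP is an appliance or a virtualized instance, its core job is the same: pick an in-service linkset for a destination point code, with failover to an alternate route. Here is a toy sketch of that decision; the point codes and linkset names are invented for the example and don't reflect any real network.

```python
# Illustrative sketch of the routing decision an STP makes for each MSU:
# choose the highest-priority linkset toward the destination point code
# that is currently in service. All identifiers here are made up.

ROUTES = {
    # destination point code -> linksets in priority order
    "1-1-1": ["linkset_a", "linkset_b"],
    "2-2-2": ["linkset_b", "linkset_a"],
}

def route_msu(dest_point_code, available_linksets):
    """Return the first in-service linkset for the destination, or None."""
    for linkset in ROUTES.get(dest_point_code, []):
        if linkset in available_linksets:
            return linkset
    return None  # destination unreachable

print(route_msu("1-1-1", {"linkset_a", "linkset_b"}))  # primary: linkset_a
print(route_msu("1-1-1", {"linkset_b"}))               # failover: linkset_b
```

The logic is identical whether the linksets run over TDM spans or SIGTRAN associations, which is exactly why the SIGTRAN-facing STPs are the natural candidates for virtualization.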
For operators around the globe, legacy SS7 services are neither dying out nor being rendered defunct by IP messaging applications. Moreover, with the number of global STP suppliers now dwindling, the question being pondered by many network operators is not "when will SS7 be dead?", but rather, "when should I replace my unsupported STPs?"
With a focus on solving today’s challenges, Dialogic offers a suite of both virtualized and appliance-based signaling solutions. If an STP replacement might be in your future, we invite you to learn more by downloading the Dialogic DSI STP datasheet.
Publish Date: July 29, 2016 5:00 AM
You may not have, as I did, the luxury of choosing the array of hardware – rack-mount servers (RMS) or blade servers – to be used for your OPNFV implementation. I'm going to assume you're not rolling in a rack full of blades, fiber switch fabric, and storage arrays, but have to make do with what you hope are adequate 1U or 2U machines. I'll mention later some of the interesting twists encountered with a blade setup, but let's start with rack-mount servers in this blog.
First, how many do you really need for a minimal but usable deployment? Yes, it is possible to do everything on a single large system by putting the various OpenStack nodes on virtual machines (often with Oracle VirtualBox). But this leads to extra network configuration, as what are normally physical NICs will now be virtual NICs. And, worse yet, the machine that will run your guest VMs will itself be a virtual machine. This added layer of virtualization will not be a pleasant thing.
In my mind, a set of four bare metal systems is needed for a minimal OPNFV deployment. This will give you a Fuel master node to deploy from and a place to park all of the necessary roles required for OPNFV. What you will lose is some speed, because you have jammed several functions onto the same system, and you will sacrifice redundancy/high availability. In addition, this configuration only provides a single compute node, so this node should at least be a sizable system. My compute node has 8 CPUs and 32 GB of memory – enough to start up at least a handful of guest VMs.
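If you want to sanity-check what "a handful of guest VMs" means for your own hardware, a back-of-the-envelope calculation helps. The overcommit ratios below are assumptions in the spirit of common OpenStack defaults, not values mandated by OPNFV; plug in your own.

```python
# Rough sizing for a single compute node. Overcommit ratios are assumed
# values (OpenStack commonly overcommits CPU but not RAM); adjust to taste.

def max_guests(host_cpus, host_ram_gb, vm_vcpus, vm_ram_gb,
               cpu_overcommit=4.0, ram_overcommit=1.0):
    by_cpu = int(host_cpus * cpu_overcommit // vm_vcpus)
    by_ram = int(host_ram_gb * ram_overcommit // vm_ram_gb)
    return min(by_cpu, by_ram)  # the scarcer resource wins

# A handful of 2-vCPU / 4 GB guests on the 8 CPU / 32 GB node from this post:
print(max_guests(8, 32, vm_vcpus=2, vm_ram_gb=4))  # RAM-bound: 8 guests
```

Notice that with these flavors the node runs out of RAM long before it runs out of (overcommitted) CPU, which is typical for lab deployments.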
Let’s look at the overall picture of the deployment. Note that this shows only the RMS side of things, although the virtual Fuel machine is there.
Here are some important points:
If you are not huddled next to your stack of systems, you will likely wish you had viable remote consoles. Unless remote management capabilities are built into the server, this would be through an Ethernet-enabled KVM switch. You will probably need to change BIOS settings, check the progress of PXE boots and get to the systems when networked ssh is not functioning for reasons unknown.
Let’s move on to the Fuel setup. As a quick refresher, Fuel is an installer for your OpenStack/OPNFV environment that provides a GUI to lead you through the OpenStack configuration (If you need a refresher, here’s the link to an earlier blog). As I have mentioned, the OPNFV Fuel Installation Guide is quite comprehensive, laid out in cookbook fashion so that it can be followed from start to finish. So, nothing will be gained by rehashing it. What I will do is point out some “gotchas” that got me in the hope that they won’t get you.
Fuel setup tips:
As with the Fuel installation, the OPNFV Fuel Installation Guide does a good job of laying out the steps of setting up for and doing the OpenStack deployment using the Fuel GUI. However, there are some traps you can possibly fall into along the way. My next blog entry on my OPNFV journey continues with exposing some of those traps as well as highlighting some additional points to think about in deploying your OPNFV platform.
Publish Date: July 28, 2016 5:00 AM
This infographic is your GPS to help you map out the route to an evolved IPX on which to offer revenue-ready and customized voice and video applications along with advanced interconnection capabilities. Wholesale carriers and IPX operators can leverage their position as a trusted intermediary with MNOs and fixed line operators to provide secure any-to-any mediation and transcoding services for enabling VoLTE/IMS connectivity.
Download the PDF Version
Publish Date: July 28, 2016 5:00 AM
In an IoT world, you'll have a connected car. Your connected car has all kinds of sensors to detect when to brake, how your car is performing, etc., and a connected speaker that will communicate with you. If your car stops suddenly and no brakes were applied, it might mean you got into a crash. If you have the right app, you might find your car talking to you to see if you are OK. The voice on the other end will most likely be your vehicle assistance company, or an emergency services drone hovering above and connected to your car speaker. All the wearables you have on will also be measuring and probing your health. Something might indicate to the right application that you have some kind of problem, and if it did, you might also hear someone asking if you need assistance. Video would be involved as well, because cameras will be everywhere. This is an example of a more complicated instance, but IoT will be used for very simple functions as well.
If a sensor picked up some anomaly, such as low water pressure, or high isolated temperature, someone could look at a camera first to see if there is an actual issue. This would all save time and expenses. These are some examples I can think of for the marriage of IoT and voice communications, but I’m sure there are literally thousands, if not hundreds of thousands more use cases involving voice and video.
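The connected-car scenario boils down to a simple rule: a hard stop with no brake input looks like a crash, so escalate to a voice check-in. Here is an illustrative sketch; the event fields and the function are invented for the example, not part of any real telematics API.

```python
# Made-up trigger rule for the connected-car example: a sudden stop
# without brake input escalates to a real-time voice check-in.

def should_start_voice_checkin(event):
    sudden_stop = (event["speed_before_kmh"] > 50
                   and event["speed_after_kmh"] == 0)
    return sudden_stop and not event["brakes_applied"]

crash_like = {"speed_before_kmh": 80, "speed_after_kmh": 0,
              "brakes_applied": False}
normal_stop = {"speed_before_kmh": 80, "speed_after_kmh": 0,
               "brakes_applied": True}

print(should_start_voice_checkin(crash_like))   # True: place the call
print(should_start_voice_checkin(normal_stop))  # False: just a hard brake
```

The point is that the sensor data only decides *whether* to communicate; voice and video are what turn the event into assistance.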
These examples show that the communications industry can and will play in the IoT market with the marriage of real time communications and IoT. It will just be different than what built all of our companies today. The main app will not be person-to-person communication like we are used to. The value of IoT to our companies is tremendous, and because of IoT, the voice/video/messaging part we know well will be a part of a larger application story.
Publish Date: July 19, 2016 5:00 AM
Before we can look at the components of OPNFV, let’s start back with a definition from my last blog. “OPNFV is basically an Openstack deployment framework, with emphasis on the networking side.“ This means that we have at least 3 pieces here – Openstack itself, a way to deploy it, and some networking “extras.” That might lead you to ask – “Well, why don’t I just use Openstack, if that’s at the core of things?”
You could, but remember my warning about re-inventing the wheel. Wouldn’t it be way better to get everything you need to deploy an Openstack cloud in a single package, with good instructions, and a community willing to answer the inevitable questions? Oh, yes it would…
Let’s look at what we will be working with. For those who are not used to dealing with the innards of a cloud, it will help to think in terms of “layers.” Some of these layers are real, and some are virtual. Some allow you to install, deploy, and administer Openstack, and some do the actual work that everything else is there to support – running your application in a virtualized networking environment.
It’s probably best to start by reviewing the layers, at a high level, in the order in which you will be dealing with them. Here they are:
The Openstack deployment controller or master node - It is from here that all good things will spring. Once installed, the master will control how your Openstack environment will both be initially configured, and, how it will be enhanced as time goes on. As more users are added, they will need more image storage and more compute nodes on which to do their work. A deployment controller will allow these additions to be easily done.
The Openstack nodes themselves - There are many different types of nodes that can come into play in Openstack, but not all are needed in every deployment. With OPNFV, there are a set of core nodes that are needed, and a few others that may be deployed if desired. Narrowing down the possibilities helps to alleviate the information overload that comes with figuring out how you want to use something new and complex.
The guest virtual machines running in the Openstack environment. These would be the reason for doing all this in the first place – a set of virtual machines that may be assigned to different users, brought up and down at will, and put into a multitude of virtual network configurations.
Now we can get into specifics. In doing so, I will be telling you what worked for me, and why I made the choices I did. They may not be exactly what you want or need. But, hey, it’s my blog…
There are no less than 4 different installers available with OPNFV. After some experimenting, I chose Fuel. Here are my reasons:
I previously mentioned that I have set up and plan to maintain two separate OPNFV deployments – one for experimental and development purposes, and a second for internal QA, testing, and demos. One is a set of rack mount servers, the other an HPE Bladecenter. As these are two unlike environments, I chose to use two separate installations of Fuel to deploy the two separate OPNFVs. But I was able to save the price of a system by doing the two Fuel master nodes as virtual machines. Overall, Fuel does a lot of sitting around. It’s only hard at work when called on to deploy Openstack. So, the two virtual Fuel nodes on a single modest CentOS 7 host had more than enough resources when each needed them.
Let’s now take a look at the Openstack nodes and roles that are part of the OPNFV deployment. The “nodes” here are the physical machines themselves, while the “roles” are the necessary Openstack services or functions that must be present to get to a working deployment. And to make matters more interesting, there are many possibilities for assigning roles to nodes.
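One way to picture the nodes-versus-roles distinction is as a simple mapping from physical boxes to the services they carry. The layout below matches the minimal four-system deployment described earlier in this series, but the exact role names and packing are an illustrative assumption, not Fuel's required assignment.

```python
# Illustrative role-to-node packing for a minimal 4-box OPNFV lab.
# Role names and combinations are assumptions for the sketch.

nodes = {
    "node-1": ["fuel-master"],
    "node-2": ["controller"],
    "node-3": ["controller-standby", "storage"],  # roles can be combined
    "node-4": ["compute"],
}

def nodes_with_role(role_prefix):
    """List the physical nodes carrying a role that starts with the prefix."""
    return sorted(n for n, roles in nodes.items()
                  if any(r.startswith(role_prefix) for r in roles))

print(nodes_with_role("controller"))  # ['node-2', 'node-3']
print(nodes_with_role("compute"))     # ['node-4']
```

With more hardware you would spread these out (three controllers for HA, several computes); with less, you jam more roles onto fewer nodes and accept the trade-offs.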
Openstack Roles – there is a fairly long list of roles shown by Fuel, but only some are relevant for OPNFV. The most important of these are:
Now, how does OPNFV suggest deploying the system? It's not always clear – how many boxes do you really need, and how are functions best divided up among them? Often, you will want to know the minimum needed, as few people seem to have closets full of up-to-date servers with good virtualization support. Things that need to be taken into account to arrive at the "right" (for you, anyway) answers include:
I realize I’ve given you no hard recommendations here yet. Things that worked well for me will be revealed when I get into more of the details in subsequent blogs. The next blog entry will be relatively short – how to get prepared before actually installing OPNFV. Then we’ll get down to the meat of things.
This post is a part of the "OPNFV Demystified" blog series. The next part of the blog series will be posted on Thursday morning each week. Check out the intro post, if you missed it.
Publish Date: July 14, 2016 5:00 AM
In March, Chetan Sharma released a 2015 US Mobile Market update report. There are many interesting points in this report, but one thing I want to talk about today is that voice revenues declined by 24% and messaging revenues declined by 18%. Wow, that’s a lot. And since the overall mobile market increased in the US, that means data service revenue is increasing quite a bit. Is that a concern or not?
Years ago, clearly this was a concern. No one wanted to just be a bit pipe. It's pretty well understood and accepted now that in order for that not to be the case, value-added services on top of the data pipe will need to be offered. So the concept of the connected car, smart city, smart home, etc. is where the future growth is. All of these concepts require data and are value-added services on top of it, but do not utilize messaging or voice.
But there are huge opportunities for messaging or voice innovation here on top of these new services. The application or service doesn’t have to revolve around voice or text as in the past, but voice, video or text can be an integral / optional part of one of the new IoT type of services. That means WebRTC and media servers can play a huge role in some of these new innovative services going forward.
Next week I’ll start a 3 part blog series about IoT and real-time communications to explore this concept a bit more.
Publish Date: June 28, 2016 5:00 AM
SaaS, IaaS, PaaS, NaaS, MaaS, and UCaaS are all examples of XaaS. According to TechTarget, XaaS is a collective term said to stand for "anything as a service." The acronym refers to an increasing number of services that are delivered over the Internet, on-demand, and on a subscription basis. XaaS is the essence of cloud computing. For the readers of this blog, especially network operators, UCaaS – Unified Communications as a Service – should be the acronym of interest, especially since it is forecast to be a $37.85 billion market by 2022, according to Transparency Market Research.
No need to dwell on the well-known fact that network operators are losing ground to OTT services. Let's focus on the positive and point out that network operators are in an ideal position to offer UCaaS because of their infrastructure – Hosted PBX and SIP Trunking for their business customers, and basic/advanced Class 5 services for their residential customers. Not to mention more revenue-generating services such as hosted contact centers, network auto attendant, corporate ACD, outbound SMS and calling, WebRTC video conferencing, and call interception, just to name a few.
An old article I recently re-read covered a lot of the benefits of UCaaS succinctly. According to the article, UCaaS offers flexibility and expandability that small and medium-sized businesses could not otherwise afford, allowing for the addition of devices, modes, or coverage on demand. The network capacity and feature set can be changed from day to day if necessary, so that functionality keeps pace with demand and resources are not wasted. There is no risk of the system becoming obsolete and requiring periodic major upgrades or replacement.
Let’s not forget a single bill that consolidates telecoms services.
Off-loading communications needs from on-premises equipment to the cloud is a big step for any enterprise, so once they are onboard, network operators have a captive audience which must continuously be offered new services. This in turn directly affects ARPU positively.
What makes UCaaS such a great business proposition for network operators is that the infrastructure is already in place. Mostly what is needed now are the applications. Once again, network operators are ideally positioned to take advantage of this rapidly growing UCaaS trend with their existing infrastructure, which most OTT players are unable to do.
As US President Theodore Roosevelt said, "Do what you can, with what you have, where you are."
Publish Date: June 24, 2016 5:00 AM
A few weeks ago I attended and spoke at ByNet Expo in Israel. I spoke in the telecom track about the “Agile Network.” Part of what I was talking about was the value of WebRTC and NFV going forward in terms of the profound impact and changes these technologies will have on telecom.
During this event, I had an opportunity to meet with a number of our customers. One of the interesting customers I met with was Fone.do.
As I stated last summer, WebRTC has moved out of the hype phase and into the implementation phase. And Fone.do is definitely one of the companies in the implementation phase of WebRTC. In fact, they've built a cloud-based PBX, targeted at small businesses, entirely from WebRTC. They bring a "web" mindset to the party. For instance, when you put your address into their system, they'll bring up a Google map to show you. Not too hard to do, but it's definitely different.
They also challenge you to set up the phone system in under 3 minutes. I was a bit dubious about this prospect, so right there in the meeting, I became a small business owner and I set out to set up a phone system for my fictitious 5 person company. We each got phone numbers, made some calls, left some voice mails, etc. It was pretty easy to do. So if you are a small business owner in the market for a cloud-based PBX, check them out. They should change their slogan to “Even a VP can set up a phone system in under 3 minutes.” Fone.do certainly makes setting up a phone system a can.do job.
I’ll have more to say about the state of WebRTC in a few weeks.
Publish Date: June 21, 2016 5:00 AM
Scaling SIP services can be tough, but it shouldn't be. Follow this ‘how-to’ guide or video (at the bottom of the post) to get your load balancer working in 10 minutes or less.
Oh how I wish this statement were always true:
if (one call works) then (multiple calls will work too)
Unfortunately, it's not always that easy when dealing with real-time communication applications, as they have unique characteristics that directly affect scalability. What do I mean by that? Take SIP, for instance, which is inherently a chatty protocol requiring a high level of transactions per second. The chattiness can range from the basic three-way handshake for an INVITE to periodic INFO update messages – the point being, each SIP message needs to be handled properly by the application. Proper handling requires processing, and with finite processing capacity per server, that imposes scalability restrictions on your application. That is why we purpose-built the PowerVille™ LB load balancer from the bottom up not only to handle a high rate of real-time transactions but also to do it intelligently, with service-aware routing and seamless high-availability failover. Couple that with an intuitive, streamlined web UI, and the PowerVille™ LB will be one of the easiest experiences you'll ever have.
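To see the chattiness in numbers, count the messages in even the most basic call. The trace below is a simplified illustration of a SIP dialog, not a complete RFC 3261 exchange; every one of these start-lines is something the load balancer must parse and route.

```python
# A simplified SIP trace for one short call, showing why SIP is "chatty":
# seven messages for a single dialog, each needing correct routing.

from collections import Counter

trace = [
    "INVITE sip:bob@example.com SIP/2.0",
    "SIP/2.0 100 Trying",
    "SIP/2.0 180 Ringing",
    "SIP/2.0 200 OK",
    "ACK sip:bob@example.com SIP/2.0",
    "BYE sip:bob@example.com SIP/2.0",
    "SIP/2.0 200 OK",
]

def message_kind(start_line):
    # Responses start with "SIP/2.0"; requests start with the method name.
    first = start_line.split()[0]
    return "response" if first == "SIP/2.0" else first

counts = Counter(message_kind(line) for line in trace)
print(counts["response"])                              # 4 responses
print(counts["INVITE"], counts["ACK"], counts["BYE"])  # 1 1 1
```

Multiply that by thousands of simultaneous calls, plus mid-dialog updates, and the per-server transaction budget gets eaten quickly.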
This is part 1 of a multi-part ‘how-to’ series that will cover the basic installation of the PowerVille LB binaries along with configuring and testing your first SIP service. Spoiler alert: This ‘how-to’ will be the longest of the series given the need to install the binaries. All subsequent guides implementing other services will be wicked easy and short.
Be sure to reach out to me if you have any questions about the ‘how-to’ or the product.
Overview and IP address assignments:
The diagram below is a high-level view of the components used for this ‘how-to’, including the LinPhone soft SIP client for generating calls, the PowerVille LB, and two PowerMedia XMS servers. I’ve left the IP address assignments from my setup unchanged, but obviously your setup can use any IP scheme you’d like.
PowerVille LB Install Instructions:
Yellow highlight indicates input required
Green highlights useful information
1.) First you’ll need to download the PowerVille LB binaries by first requesting a trial copy HERE.
Note - for this 'how-to' the GA version of the PowerVille LB was v1.3.15
2.) Once you’ve received the link and downloaded, copy the load balancer .jar file to your CentOS server (root or tmp directory is fine)
3.) Log into your load balancer instance via SSH and change directory to where to you uploaded the load balancer .jar file.
4.) Run the installer script and follow the prompts for installation:
[root@loadbalancer-vfp ~]# java -jar dialogic-lb-installer-1.3.15.jar
Please enter the location of your Java JRE install that will be used to run the Load Balancer [/usr/bin/java]
[enter for default]
The list of available IP Addresses are as follows:
Please enter your IP Address that the Load Balancer will use for management traffic. [192.168.1.138]
[enter for default]
The Load Balancer needs to send and receive VIP request/ responses via a specific interface. Available interfaces are listed below:
Please enter the name of the interface you would like the Load Balancer to send and receive VIP request/ responses from the list [eth0] :
[enter for default]
Please enter a Multicast Base Address [default:184.108.40.206] :
[enter for default]
press 1 to accept, 2 to reject, 3 to redisplay
Select target path [/opt/nst-loadbalancer]
[enter for default]
The directory already exists and is not empty! Are you sure you want to install here and delete all existing files?
Press 1 to continue, 2 to quit, 3 to redisplay
* Press 1 if you would like to create a new installation of the Jetty web server
* Press 2 if you would like to install the Load Balancer Admin UI within an existing Jetty instance
Please enter a path where you would like to install the jetty web server [default: /opt/nst-loadbalancer ] :
[enter for default]
Select the packs you want to install:
 LB (The Load Balancer base Installation files)
...pack selection done.
press 1 to continue, 2 to quit, 3 to redisplay
[ Starting to unpack ]
[ Processing package: LB (1/1) ]
[ Unpacking finished ]
Install of the Load Balancer successfully complete.
The Load Balancer has been installed at the following location - /opt/nst-loadbalancer
You can now view the web admin ui at the following URL:
Login details are as follows
Username : root
Password : admin
[ Console installation done ]
PowerVille LB Configuration Instructions:
1.) Open the load balancer web UI – http://192.168.1.138:8888/lb
Login using default username and password: root/admin
2.) If the install was successful, the load balancer status should turn green. Click the ‘unlock config’ button at the top right to proceed with the configuration.
3.) First add an ‘interface’ by clicking ‘provisioning → interface’ on the left hand side. Then click ‘add’. Leave the default ‘eth0’ interface. Finish by clicking ‘add’.
Note: My ethernet interface was 'eth0' but yours may be different based on the CentOS install.
4.) Next add a ‘Service Node’ by first clicking ‘provisioning → Service Node’ on the left hand side. Then click ‘add’. For the ‘address’, input the IP address of the PowerMedia XMS server (or the SIP endpoint you are sending traffic to). Finish by clicking ‘add’. Repeat the process for the second PowerMedia XMS server or SIP endpoint.
Note: My PowerMedia XMS IP addresses were assigned: 192.168.1.102 & 192.168.1.105
5.) Add a ‘Service VIP’ by first clicking ‘provisioning → Service VIP’ on the left hand side. Then click ‘add’. For the ‘address’, input the IP address for the inbound virtual IP (IB-VIP), which will handle incoming SIP traffic. Finish by clicking ‘add’. Repeat the process for the second, outbound virtual IP (OB-VIP), which will send the SIP traffic to the endpoints.
Note: My IB-VIP address was assigned: 192.168.1.188 and my OB-VIP address was assigned 192.168.1.238
6.) Now that you've defined your ethernet interface, the service nodes and service virtual IP addresses, it's time to build the SIP load balancer service. First click 'services' on the left hand side. Once on the 'services' page, click 'add services' at the bottom.
8.) Next is the LB Service Configuration for the SIP service. On this page you can configure ports, routing options, logging, etc. For this 'how-to' we'll only be changing the 'Inbound VIP Bind Address' and 'Outbound VIP Bind Address' created in step 5. Click ‘next’ to continue.
9.) Next we need to link the defined nodes (SIP endpoints) to the SIP service by clicking ‘configure’ on the right hand side.
10.) At the 'configure nodes' page, click ‘add’. Select the ‘address’ to be the IP address of the first PowerMedia XMS (or other SIP endpoint). Repeat the process for the IP address of the second PowerMedia XMS. Click ‘add’ then ‘save’ to continue
11.) If added and configured correctly, the ‘sip_lb’ service should change to green indicating your SIP service is ready and the SIP endpoints are available.
Testing your loadbalancer SIP service:
1.) Test the new SIP load balancer service by opening your SIP phone and making a call to:
Note - replace the @ IP address with the Inbound VIP assigned to your setup
Make sure audio has been established.
End the call and make the call again – the second SIP endpoint / PowerMedia XMS should now be receiving the call.
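That alternating behavior is classic round-robin dispatch: each new call to the inbound VIP is handed to the next service node in turn. Here is a minimal sketch of the idea (the IPs match the example setup in this post; the code is an illustration of the scheduling concept, not the PowerVille LB's internals).

```python
# Round-robin dispatch sketch: successive calls alternate between the
# two service nodes, matching the behavior tested above.

from itertools import cycle

service_nodes = cycle(["192.168.1.102", "192.168.1.105"])

def next_node():
    """Return the service node that should receive the next call."""
    return next(service_nodes)

print(next_node())  # first call:  192.168.1.102
print(next_node())  # second call: 192.168.1.105
```

A real load balancer layers health checks and service-aware routing on top of this, skipping nodes that are down, which is what the green/red status indicators in the web UI reflect.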
CONGRATS - YOU'RE DONE!!
Follow along the tutorial with Vince Puglia in this video:
Publish Date: June 16, 2016 5:00 AM
There’s been quite a buzz coming out of Apple’s recent announcement about iOS 10. What caught my eye was the part about messaging. Here at Dialogic we often highlight real time communications (RTC) solutions and how we can make those solutions great by working with partners. So when I see an article in TechCrunch with the headline “Apple’s iOS10 Finally, Truly Begins the Mobile Messaging War,” it’s something to take note of.
Real-time communications takes its form as messaging, voice, and video applications. I think the author is right in that the new battle lines for messaging solutions are being drawn around the web and applications; and how additional differentiators will be around connections and payments.
In recent times, a growing number of web developers have been buying development platforms from Dialogic to incorporate real-time communications into their web-based solutions. The industry is just beginning to see this take place. There are new tools, API-level programming, and development kits to make it easier for web developers to embed RTC in their applications. We've seen quite a range of applications being developed, from web-based customer service solutions to payment-type applications.
Messaging has proven to be effective and efficient as a standalone solution. It's going to be exciting to see messaging as an integral part of a whole new range of web-based applications.
Publish Date: June 15, 2016 5:00 AM
The Internet of Things is all about connectivity of everything. While some IoT connectivity will be from wired devices and sensors, much of it will be from mobile connections. But how does one measure mobile IoT adoption? According to the February 2016 Cisco VNI report, measuring the growth of smarter end-user devices and M2M connections is a clear indicator of the growth of IoT. And the VNI report predicts some whopping growth – from 604 million M2M connections in 2015 to 3.1 billion by 2020. Machina Research expected 24 billion connected devices by 2024. Clearly, smart cities, maintenance, automotive, healthcare, etc. are seeing the benefits of connected information.
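The Cisco VNI figures quoted above imply a remarkably steep compound annual growth rate. A quick calculation from 604 million connections in 2015 to 3.1 billion in 2020:

```python
# CAGR implied by the Cisco VNI M2M forecast cited above:
# 604 million connections (2015) growing to 3.1 billion (2020).

m2m_2015 = 604e6
m2m_2020 = 3.1e9
years = 5

cagr = (m2m_2020 / m2m_2015) ** (1 / years) - 1
print(f"{cagr:.0%}")  # roughly 39% per year
```

Sustaining nearly 39% annual growth for five years is what "whopping" looks like in numbers.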
And businesses and consumers are rushing in to either provide or obtain better customer service. Much of M2M connectivity will be from some kind of short range technology like WiFi that gets handed off to a wired network.
But on the cellular network front, will M2M really have any impact? I mean, these are short data interactions for the most part. Machina Research estimates that M2M in 2015 accounted for 2% of cellular traffic, growing to 4% by 2024.
These are pretty interesting stats. Two percent may not sound like much, but I was surprised M2M accounted for even that share of traffic in 2015, because M2M connections are just getting started. And growth to 4% of 2024 traffic is much larger than it sounds, considering the monstrous overall data growth to come on cellular networks; carving out a larger percentage of that is no mean feat. There are likely to be issues for sure, and the GSMA is wading in to try to help avoid any chaos, at least on the LTE network. It is expected that the connected car segment will be using the LTE network, and if we have self-driving cars by 2024, we had better not have any latency. At any rate, I'm not sure if they'll help anyone avoid anything, muck it up, or actually help, but they are in a position to try to do something.
In a couple of weeks, I'll write a few blogs about the marriage of IoT and Real-Time Communications, so look for those.
Publish Date: June 14, 2016 5:00 AM
As we become increasingly dependent on IP networks and applications for everyday business and commerce, what used to be a “convenience” has now turned into a “necessity”.
In a briefing this week with Michael Suby, VP of Research at Frost and Sullivan, he and I spent some time talking about the increasing dependence on IP networks and the impact of broadband penetration. His research shows a steady increase in network bandwidth utilization, end-user devices, and application proliferation. The question is, with the increasing dependence on IP networks, what risks are we taking? What are the best practices to improve reliability?
Stepping back, we talked about the evolution of IP applications, looking back to when businesses offered new services and applications to consumers as a “convenience” or to off-load work from their office staff or contact center. Self-service applications were thought of as an alternative to calling or visiting a storefront. In those early days, if the self-service application failed, a customer could always pick up the phone or run over to the local store to perform their transaction.
As adoption grew and consumers got more comfortable with mobile "apps", on-line transactions, and virtual storefronts, what once was a "convenience" turned into a "necessity", essentially becoming the primary point of interaction between consumers and the business. On-line stores, banks, insurance companies, and other businesses became completely dependent on their IP applications, web sites, and mobile applications to generate revenue and communicate with their customers. Amazon.com, esurance, PayPal, and many other examples demonstrate the shift to applications as the primary point of interaction with customers.
With the shift, the question is: “How have network designers made those applications more reliable?”
Michael will be kicking off a discussion on this topic and taking a closer look at the role of load balancers in service reliability during a webinar I'll be hosting titled "Service Reliability of IP-based Communications is Not Optional," a one-hour live event on Friday, June 24th at 11 AM ET. Also joining us will be James Rafferty, Product Line Manager for Dialogic, explaining some of the techniques available to improve service reliability of IP networks.
We'd like to invite you to register and join us for the live event, which will give you an opportunity to pose questions and interact.
Publish Date: June 13, 2016 5:00 AM
In my previous blog, I shared my thoughts on how Visual IVR, or Visual Interactive Voice Response, is an ideal service for Mobile Network Operators (MNOs) to run on their new LTE networks. What makes Visual IVR ideal for LTE is the need for speed, since the visual content is web-based (unlike Video IVR, where the content is streamed together with the audio). In many cases, LTE also provides the added ability to simultaneously manage a voice call and a data session on the network. In short, Visual IVR enables the caller to make choices both visually and audibly by syncing the audio and visual portions of the call, and LTE's rollout helps make this happen.
Currently, Visual IVR is being used primarily in mobile customer self-service. By providing a simultaneous visual alternative to navigating voice-only IVR menus, Visual IVR enhances the self-service process in a number of ways. For example, unlike voice solutions that can only speak one option at a time, Visual IVR displays a full set of menu options on a device’s screen at one time, allowing users to quickly choose the path that is right for them. This then leads to higher selection accuracy, lower average handling times, and of course an improved user experience.
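To make the contrast concrete, here is a minimal, hypothetical sketch (not Dialogic's actual API, and the menu labels are invented for illustration) of why a visual menu is faster to navigate: a voice IVR must speak its options one at a time in sequence, while a visual IVR can push the same menu structure to the device's screen all at once as web-ready content.

```python
# Hypothetical IVR menu definition shared by both the voice and visual front ends.
MENU = {
    "1": "Make a payment",
    "2": "Order products and services",
    "3": "Find your nearest store",
    "0": "Speak to an agent",
}

def voice_prompts(menu):
    """A voice-only IVR reads options sequentially, one prompt per option."""
    return [f"Press {digit} to {label.lower()}." for digit, label in menu.items()]

def visual_payload(menu):
    """A visual IVR delivers the entire menu at once as a JSON-ready
    structure, so the caller can scan every option immediately."""
    return {
        "type": "menu",
        "options": [{"dtmf": d, "label": l} for d, l in menu.items()],
    }
```

The caller on a voice-only IVR has to sit through each `voice_prompts` entry in turn; the visual caller receives the whole `visual_payload` in one round trip, which is the intuition behind the four-to-five-times-faster navigation figure cited below.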
Here is a recent installation of Visual IVR in a mobile customer care environment… A leading liquefied petroleum gas (LPG) distributor in Latin America recently installed PowerVille™ Visual IVR from Dialogic to offer its customers a visually-enhanced self-service portal, as the number of customers accessing self-service on mobile devices continues to grow exponentially. They selected Visual IVR for a number of reasons, including simplifying the interface to its mobile customer self-service portal for services such as payments, contacting customer service, ordering products and services, and locating the nearest store (integrated with Location Based Services).
The benefits of Visual IVR are many, especially in the self-service environment, with some studies showing that a caller can navigate a visual IVR menu between four and five times quicker than a DTMF (dual-tone multi-frequency) IVR menu. For the provider, Visual IVR relieves contact center volume by diverting more calls to successful self-service interactions.
By being able to share visual content, including documents and visual media, during a standard voice call, Visual IVR offers a mobile experience that engages the caller both visually and audibly.
Check out the demo video of PowerVille Visual IVR below.
Publish Date: June 10, 2016 5:00 AM