Dialogic - ContactCenterWorld.com Blog Page 5
A few weeks ago I attended and spoke at ByNet Expo in Israel. I spoke in the telecom track about the “Agile Network.” Part of what I was talking about was the value of WebRTC and NFV going forward in terms of the profound impact and changes these technologies will have on telecom.
During this event, I had an opportunity to meet with a number of our customers. One of the interesting customers I met with was Fone.do.
As I stated last summer, WebRTC has moved out of the hype phase and into the implementation phase. And Fone.do is definitely one of those companies in the implementation phase of WebRTC. In fact, they’ve built a cloud-based PBX, targeted at small businesses, entirely from WebRTC. They bring a “web” mindset to the party. For instance, when you put your address into their system, they’ll bring up a Google map to show you. Not too hard to do, but it’s definitely different.
They also challenge you to set up the phone system in under 3 minutes. I was a bit dubious about this prospect, so right there in the meeting, I became a small business owner and I set out to set up a phone system for my fictitious 5 person company. We each got phone numbers, made some calls, left some voice mails, etc. It was pretty easy to do. So if you are a small business owner in the market for a cloud-based PBX, check them out. They should change their slogan to “Even a VP can set up a phone system in under 3 minutes.” Fone.do certainly makes setting up a phone system a can.do job.
I’ll have more to say about the state of WebRTC in a few weeks.
Publish Date: June 21, 2016 5:00 AM
Scaling SIP services can be tough, but it shouldn't be. Follow this ‘how-to’ guide or video (at the bottom of the post) to get your load balancer working in 10 minutes or less.
Oh how I wish this statement were always true:
if (one call works) then (multiple calls will work too)
Unfortunately, it’s not always that easy with real-time communication applications, which have unique characteristics that directly affect scalability. What do I mean by that? Take SIP, for instance: it is inherently a chatty protocol that demands a high rate of transactions per second. The chattiness can range from the basic three-way handshake for an INVITE to periodic INFO update messages; the point being, each SIP message needs to be handled properly by the application. Proper handling requires processing, and with finite processing capacity per server, that imposes scalability limits on your application. That is why we purpose-built the PowerVille™ LB load balancer from the bottom up not only to handle a high rate of real-time transactions, but also to do it intelligently, with service-aware routing and seamless high-availability failover. Couple that with an intuitive, streamlined web UI, and the PowerVille™ LB will be one of the easiest experiences you’ll ever have.
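To make that idea concrete, here is a tiny Python sketch of the kind of node selection a SIP load balancer performs: round-robin over a pool of service nodes, skipping any node that has failed a health check. This is purely an illustration of the concept, not the PowerVille LB’s actual routing logic.

```python
from itertools import cycle

class SipNodePool:
    """Round-robin selection over service nodes, skipping nodes
    currently marked down (e.g., after a failed health check)."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.healthy = set(self.nodes)
        self._rr = cycle(self.nodes)

    def mark_down(self, node):
        self.healthy.discard(node)

    def mark_up(self, node):
        self.healthy.add(node)

    def next_node(self):
        # Advance the round-robin cursor until a healthy node is found.
        for _ in range(len(self.nodes)):
            node = next(self._rr)
            if node in self.healthy:
                return node
        raise RuntimeError("no healthy service nodes")

# The two PowerMedia XMS servers from the setup below.
pool = SipNodePool(["192.168.1.102", "192.168.1.105"])
print(pool.next_node())        # 192.168.1.102
pool.mark_down("192.168.1.102")
print(pool.next_node())        # 192.168.1.105 (failed node is skipped)
```

The point of the sketch: once health checking and failover live in the load balancer, the application servers behind it can stay simple.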
This is part 1 of a multi-part ‘how-to’ series that will cover the basic installation of the PowerVille LB binaries along with configuring and testing your first SIP service. Spoiler alert: This ‘how-to’ will be the longest of the series given the need to install the binaries. All subsequent guides implementing other services will be wicked easy and short.
Be sure to reach out to me if you have any questions about the ‘how-to’ or the product.
- CentOS 6.4 or CentOS 7 minimal installed - Link to download
- Stop and disable firewall service
- Disable SELinux
- A SIP phone for generating calls - Link to download
- At least (1) PowerMedia XMS server installed or SIP endpoint for receiving the SIP call
- 'How-to' install PowerMedia XMS
Overview and IP address assignments:
The below diagram is a high-level visual of the components being used for this ‘how-to’ including the LinPhone soft SIP client for generating calls, the PowerVille LB and (2) PowerMedia XMS servers. I’ve left the IP address assignments from my setup unchanged but obviously your setup can use any IP scheme you’d like.
PowerVille LB Install Instructions:
Yellow highlight indicates input required
Green highlights useful information
1.) First you’ll need to download the PowerVille LB binaries by requesting a trial copy HERE.
Note - for this 'how-to' the GA version of the PowerVille LB was v1.3.15
2.) Once you’ve received the link and downloaded the file, copy the load balancer .jar file to your CentOS server (the root or tmp directory is fine)
3.) Log into your load balancer instance via SSH and change directory to where you uploaded the load balancer .jar file.
4.) Run the installer script and follow the prompts for installation:
[root@loadbalancer-vfp ~]# java -jar dialogic-lb-installer-1.3.15.jar
Please enter the location of your Java JRE install that will be used to run the Load Balancer [/usr/bin/java]
[enter for default]
The list of available IP Addresses are as follows:
Please enter your IP Address that the Load Balancer will use for management traffic. [192.168.1.138]
[enter for default]
The Load Balancer needs to send and receive VIP requests/responses via a specific interface. Available interfaces are listed below:
Please enter the name of the interface you would like the Load Balancer to send and receive VIP requests/responses on from the list [eth0] :
[enter for default]
Please enter a Multicast Base Address [default:22.214.171.124] :
[enter for default]
press 1 to accept, 2 to reject, 3 to redisplay
Select target path [/opt/nst-loadbalancer]
[enter for default]
The directory already exists and is not empty! Are you sure you want to install here and delete all existing files?
Press 1 to continue, 2 to quit, 3 to redisplay
* Press 1 if you would like to create a new installation of the Jetty web server
* Press 2 if you would like to install the Load Balancer Admin UI within an existing Jetty instance
Please enter a path where you would like to install the jetty web server [default: /opt/nst-loadbalancer ] :
[enter for default]
Select the packs you want to install:
 LB (The Load Balancer base Installation files)
...pack selection done.
press 1 to continue, 2 to quit, 3 to redisplay
[ Starting to unpack ]
[ Processing package: LB (1/1) ]
[ Unpacking finished ]
Install of the Load Balancer successfully complete.
The Load Balancer has been installed at the following location - /opt/nst-loadbalancer
You can now view the web admin ui at the following URL:
Login details are as follows
Username : root
Password : admin
[ Console installation done ]
PowerVille LB Configuration Instructions:
1.) Open the load balancer web UI – http://192.168.1.138:8888/lb
Login using default username and password: root/admin
2.) If the install was successful, the load balancer status should turn green. Click the ‘unlock config’ button at the top right to proceed with the configuration.
3.) First add an ‘interface’ by clicking ‘provisioning --> interface’ on the left-hand side. Then click ‘add’. Leave the default ‘eth0’ interface. Finish by clicking ‘add’.
Note: My ethernet interface was 'eth0' but yours may be different based on the CentOS install.
4.) Next add a ‘Service Node’ by clicking ‘provisioning --> Service Node’ on the left-hand side. Then click ‘add’. For the ‘address’, input the IP address of the PowerMedia XMS server (or the SIP endpoint you are sending traffic to). Finish by clicking ‘add’. Repeat the process for the second PowerMedia XMS server or SIP endpoint.
Note: My PowerMedia XMS IP addresses were assigned: 192.168.1.102 & 192.168.1.105
5.) Add a ‘Service VIP’ by clicking ‘provisioning --> Service VIP’ on the left-hand side. Then click ‘add’. For the ‘address’, input the inbound virtual IP address (IB-VIP), which will handle the incoming SIP traffic. Finish by clicking ‘add’. Repeat the process for the outbound virtual IP address (OB-VIP), which will send the SIP traffic to the endpoints.
Note: My IB-VIP address was assigned: 192.168.1.188 and my OB-VIP address was assigned 192.168.1.238
6.) Now that you've defined your Ethernet interface, service nodes, and service virtual IP addresses, it's time to build the SIP load balancer service. First click 'services' on the left-hand side. Once on the 'services' page, click 'add services' at the bottom.
8.) Next is the LB Service Configuration for the SIP service. On this page you can configure ports, routing options, logging, etc. For this 'how-to' we'll only be changing the 'Inbound VIP Bind Address' and 'Outbound VIP Bind Address' to the VIPs created in step 5. Click ‘next’ to continue.
9.) Next we need to link the defined nodes (SIP endpoints) to the SIP service by clicking ‘configure’ on the right-hand side.
10.) At the 'configure nodes' page, click ‘add’. Select the ‘address’ to be the IP address of the first PowerMedia XMS (or other SIP endpoint). Repeat the process for the IP address of the second PowerMedia XMS. Click ‘add’ then ‘save’ to continue
11.) If added and configured correctly, the ‘sip_lb’ service should change to green indicating your SIP service is ready and the SIP endpoints are available.
Testing your load balancer SIP service:
1.) Test the new SIP load balancer service by opening your SIP phone and making a call to:
Note - replace the @ IP address with the Inbound VIP assigned to your setup
Make sure audio has been established.
End the call and make the call again – the second SIP endpoint / PowerMedia XMS should now be receiving the call.
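As an optional extra check, you can script a quick reachability probe instead of clicking through a soft phone. The sketch below only builds a minimal SIP OPTIONS request; the client address 192.168.1.130 is a made-up example, so substitute your own host, and note this is a generic SIP message, not anything PowerVille-specific.

```python
import uuid

def build_options(vip_host, vip_port, local_host, local_port=5060):
    """Build a minimal SIP OPTIONS request targeting the load
    balancer's inbound VIP, usable as a quick reachability probe."""
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]   # RFC 3261 branch magic cookie
    return (
        f"OPTIONS sip:{vip_host}:{vip_port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_host}:{local_port};branch={branch}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:probe@{local_host}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{vip_host}:{vip_port}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}\r\n"
        f"CSeq: 1 OPTIONS\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

msg = build_options("192.168.1.188", 5060, "192.168.1.130")
print(msg.splitlines()[0])   # OPTIONS sip:192.168.1.188:5060 SIP/2.0
```

Send the message over a UDP socket to the IB-VIP on port 5060; any SIP response coming back tells you the service is listening.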
CONGRATS - YOU'RE DONE!!
Follow along with Vince Puglia in this video tutorial:
Publish Date: June 16, 2016 5:00 AM
There’s been quite a buzz coming out of Apple’s recent announcement about iOS 10. What caught my eye was the part about messaging. Here at Dialogic we often highlight real time communications (RTC) solutions and how we can make those solutions great by working with partners. So when I see an article in TechCrunch with the headline “Apple’s iOS10 Finally, Truly Begins the Mobile Messaging War,” it’s something to take note of.
Real-time communications takes its form as messaging, voice, and video applications. I think the author is right in that the new battle lines for messaging solutions are being drawn around the web and applications; and how additional differentiators will be around connections and payments.
In recent times, a growing number of web developers have been buying development platforms from Dialogic to incorporate real-time communications into their web-based solutions. The industry is just beginning to see this take place. There are new tools, API-level programming, and development kits to make it easier for web developers to embed RTC in their applications. We’ve seen quite a range of applications being developed, from web-based customer service solutions to payment-type applications.
Messaging has proven to be effective and efficient as a standalone solution. It’s going to be exciting to see messaging as an integral part of a whole new range of web-based applications.
Publish Date: June 15, 2016 5:00 AM
The Internet of Things is all about connectivity of everything. While some IoT connectivity will be from wired devices and sensors, much of it will be from mobile connections. But how does one measure mobile IoT adoption? According to the February 2016 Cisco VNI report, measuring the growth of smarter end-user devices and M2M connections is a clear indicator of the growth of IoT. And the VNI report predicts some whopping growth – from 604 million M2M connections in 2015 to 3.1 billion by 2020. Machina Research expects 24 billion connected devices by 2024. Clearly, smart cities, maintenance, automotive, healthcare, etc. are seeing the benefits of connected information.
And businesses and consumers are rushing in to either provide or obtain better customer service. Much of M2M connectivity will be from some kind of short range technology like WiFi that gets handed off to a wired network.
But on the cellular network front, will M2M really have any impact? I mean, these are short data interactions for the most part. Machina Research estimates that M2M in 2015 accounted for 2% of cellular traffic, growing to 4% by 2024.
These are pretty interesting stats. While 2% is not much, it surprised me because M2M connections have only just gotten started. And growth to 4% of 2024 traffic is much larger than it sounds, considering the monstrous overall data growth to come on the cellular networks; carving out a growing percentage of that pie is no mean feat. There are likely to be issues for sure, and the GSMA is wading in to try to help avoid chaos, at least on the LTE network. The connected car segment is expected to use the LTE network, and if we have self-driving cars by 2024, we had better not have any latency. At any rate, I’m not sure whether they’ll help anyone avoid anything, muck it up, or actually help, but they are in a position to try to do something.
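To see why that 2%-to-4% doubling is bigger than it sounds, here is a quick back-of-the-envelope calculation. The 8x overall traffic growth figure is my own illustrative assumption, not a number from either report:

```python
# Illustrative only: assume overall cellular traffic grows 8x
# between 2015 and 2024 (an assumed figure, not a cited one).
total_2015 = 1.0              # normalized total cellular traffic, 2015
total_2024 = 8.0              # assumed 8x overall growth by 2024

m2m_2015 = 0.02 * total_2015  # Machina Research: 2% of 2015 traffic
m2m_2024 = 0.04 * total_2024  # Machina Research: 4% of 2024 traffic

growth = m2m_2024 / m2m_2015
print(growth)  # 16.0 -> M2M traffic itself grows 16x in absolute terms
```

So even though the percentage merely doubles, the absolute M2M traffic grows by the percentage change times the overall network growth.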
In a couple of weeks, I’ll write a few blogs about the marriage of IoT and Real-Time Communications, so look for that.
Publish Date: June 14, 2016 5:00 AM
As we become increasingly dependent on IP networks and applications for everyday business and commerce, what used to be a “convenience” has now turned into a “necessity”.
In a briefing this week with Michael Suby, VP of Research at Frost and Sullivan, he and I spent some time talking about the increasing dependence on IP networks and the impact of broadband penetration. His research shows a steady increase in network bandwidth utilization, end-user devices, and application proliferation. The question is, with the increasing dependence on IP networks, what risks are we taking? What are the best practices to improve reliability?
Stepping back, we talked about the evolution of IP applications, looking back to when businesses offered new services and applications to consumers as a “convenience” or to off-load work from their office staff or contact center. Self-service applications were thought of as an alternative to calling or visiting a storefront. In those early days, if the self-service application failed, a customer could always pick up the phone or run over to the local store to perform their transaction.
As adoption grew and consumers got more comfortable with mobile “apps”, on-line transactions, and virtual storefronts, what once was a “convenience” turned into a “necessity”, essentially becoming the primary point of interaction between consumers and the business. On-line stores, banks, insurance and other industries were becoming completely dependent on their IP applications, web sites and mobile applications to generate revenue and communicate with their customers. Amazon.com, esurance, PayPal, and many other examples demonstrate the shift to applications as the primary point of interaction with customers.
With the shift, the question is: “How have network designers made those applications more reliable?”
Michael will be kicking off a discussion on this topic and take a closer look at the role of load balancers in service reliability during a webinar I’ll be hosting titled “Service Reliability of IP-based Communications is Not Optional,” a one-hour live event on Friday, June 24th at 11 AM ET. Also joining us will be James Rafferty, Product Line Manager for Dialogic, explaining some of the techniques available to improve service reliability of IP networks.
We’d like to invite you to register and join us for the live event, which will give you an opportunity to pose questions and interact.
Publish Date: June 13, 2016 5:00 AM
In my previous blog, I shared my thoughts on how Visual IVR, or Visual Interactive Voice Response, is an ideal service for Mobile Network Operators (MNOs) to run on their new LTE networks. What makes Visual IVR ideal for LTE is the need for speed, since the visual content is web-based (unlike Video IVR where the content is streaming together with the audio). In many cases, LTE also provides the added ability to simultaneously manage a voice call and data to a network. In short, Visual IVR enables the caller to make choices both visually and audibly by syncing the audio and visual portions of the call and LTE’s rollout helps make this happen.
Currently, Visual IVR is being used primarily in mobile customer self-service. By providing a simultaneous visual alternative to navigating voice-only IVR menus, Visual IVR enhances the self-service process in a number of ways. For example, unlike voice solutions that can only speak one option at a time, Visual IVR displays a full set of menu options on a device’s screen at one time, allowing users to quickly choose the path that is right for them. This then leads to higher selection accuracy, lower average handling times, and of course an improved user experience.
Here is a recent installation of Visual IVR in a mobile customer care environment… A leading liquefied petroleum gas (LPG) distributor in Latin America recently installed PowerVille™ Visual IVR from Dialogic to offer its customers a visually enhanced self-service portal, as the number of customers accessing self-service on mobile devices continues to grow exponentially. They selected Visual IVR for a number of reasons, including simplifying the interface to its mobile customer self-service portal for services such as payments, contacting customer service, ordering products and services, and locating the nearest store (integrated with Location-Based Services).
The benefits of Visual IVR are many, especially in the self-service environment, with some studies showing that a caller can navigate a visual IVR menu between four and five times quicker than a DTMF (dual-tone multi-frequency) IVR menu. For the provider, Visual IVR relieves contact center volume by diverting more calls to successful self-service interactions. This is accomplished through:
- Visual Navigation — faster than listening to audio-only prompts
- Increased Accuracy —caller can read and reread options before making a selection
- Information-Rich Input — complex alphanumeric data can easily be collected
By being able to share visual content, including documents and visual media, during a standard voice call, Visual IVR offers a mobile experience that engages the caller both visually and audibly.
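One simple way to picture the visual layer: the IVR publishes its menu as structured data that a web view on the handset renders all at once, in sync with the audio leg of the call. The JSON shape and field names below are purely my own illustration, not the PowerVille Visual IVR API:

```python
import json

# Hypothetical menu document a Visual IVR might push to the handset's
# web view; the whole menu is visible at once instead of being read
# out one option at a time.
menu = {
    "prompt": "Main menu",
    "options": [
        {"dtmf": "1", "label": "Make a payment",          "action": "payments"},
        {"dtmf": "2", "label": "Order products/services", "action": "orders"},
        {"dtmf": "3", "label": "Find nearest store",      "action": "locator"},
        {"dtmf": "0", "label": "Talk to an agent",        "action": "transfer"},
    ],
}

def select(menu, dtmf):
    """Resolve a visual tap (or the equivalent DTMF key) to its action,
    so the audio and visual portions of the call stay in sync."""
    for opt in menu["options"]:
        if opt["dtmf"] == dtmf:
            return opt["action"]
    return None

payload = json.dumps(menu)   # what travels over the LTE data channel
print(select(menu, "3"))     # locator
```

Because the same menu drives both channels, a tap on the screen and a key press on the dial pad land on the same branch of the IVR.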
Check out the demo video of PowerVille Visual IVR below.
Publish Date: June 10, 2016 5:00 AM
Facebook is at it again. Back in 2011, Facebook formally kicked off the Open Compute Project (OCP) along with companies like Rackspace, Intel, and Goldman Sachs. The intent was to share ideas and figure out ways to build the most efficient computing infrastructure at the lowest possible cost. The various projects were set up in an open source model to help develop more efficient, more scalable, and more flexible hardware platforms for computing, storage, and networking. There are now more than 150 member companies, such as Apple, Google, and Microsoft, and earlier this year service providers AT&T, Deutsche Telekom AG, EE, and Verizon all joined the project as well.
At the recent BCE event in Austin, Facebook was pushing something relatively new: the Telecom Infra Project, or TIP. The stated goal of this project was to “reimagine the traditional approach to building and deploying telecom network infrastructure.” TIP is building on the open, community-led OCP as a model to drive innovation into traditional telecommunications infrastructure, and has established inaugural projects in three basic areas:
- Access – The focus here is on system integration and site optimization, access unbundling, and media-friendly solutions to more cost-effectively serve difficult-to-access rural and urban areas, and on identifying methods to improve throughput and the user experience by moving compute and storage resources closer to the network edge
- Backhaul – Open Optical Packet Transport and high-frequency autonomic access activities are the focus of this project, which aims to define a Dense Wavelength Division Multiplexing (DWDM) open packet transport architecture that avoids implementation lock-in, plus a lightweight and extensible software stack for routing, addressing, and security in packet-switched IP networks
- Core and Management – Core network optimization and greenfield network solutions are addressed here by deconstructing and disaggregating traditional bundled core network components and evolving telecom networks from the ground up to be more efficient and IT-oriented.
I spoke with Hans-Juergen Schmidtke, Director of Engineering Infrastructure Foundation at Facebook, who gave a keynote at BCE in which he emphasized that Facebook did not want to be viewed as a telco. He added that TIP was started in order to reimagine telco infrastructure, and that one of the goals of the project is to build infrastructure - hardware and software - for the telecom industry and change the concept of innovation in a telco environment.
The Facebook initiated Telecom Infra Project is modeled on the successful OCP to drive innovation and openness into telecom hardware and software infrastructure
Service Providers and equipment vendors have started to jump on board. EE, SK Telecom, Deutsche Telekom, Globe Telecom, Intel, and Nokia are all initial members. So it seems that the same disruptive approach applied to computing and data center architecture is being applied to telecom infrastructure. How will this align with what is going on in the ETSI NFV ISG, OpenStack, 3GPP, and other SDOs and open source activities that impact infrastructure functionality and end-to-end service orchestration? Does it even affect them? What innovation is lacking at the communications infrastructure and application layer that this project thinks it needs to address for hyperscale data center environments? The fact that operators are jumping on board along with major players from the vendor community tends to lend credence to this movement. What do you think? Tweet us at @Dialogic and let us know.
Publish Date: June 9, 2016 5:00 AM
On April 19, Dialogic’s Alan Percy hosted a webinar on “Application Development Best Practices.” To listen to the webinar, please click here. While I was listening to that webinar, I had my Product Management hat on. In my blogs, I typically write about what’s going on in the market, but today will be different. I’m going to get into my product management persona for a bit.
Everything they talked about in the webinar, such as using an open architecture, looking toward the future, and having mobile in mind, is excellent advice. However, no matter how you cut it, one big item is understanding the requirements before you start. Even in an agile development method, you need to understand the requirements. Agile doesn’t mean you just go for it; the team still discusses the requirements and prepares for what they need to do. That will save you time and money in the long run.
At any rate, go forth and develop. Just remember to think a little bit about the requirements before you start.
Publish Date: June 7, 2016 5:00 AM
With the Global Mobile Suppliers Association reporting a total of 494 LTE-based mobile data networks commercially deployed across 162 countries, it is reasonable to expect that rollout of closely associated voice-over-LTE (VoLTE) services will accelerate within the next few years. And as these VoLTE deployments accelerate, increasing numbers of end users will experience first-hand a remarkable improvement in the clarity of voice conversations along with an improvement in the ability to understand highly accented speech. These advantages are a direct result of VoLTE’s use of High Definition Voice (HD Voice) digital media formats.
In the near future, HD Voice will likely become a significant differentiator for mobile service providers, especially as market competition intensifies. In fact, voice quality plays such a critical role in mobile networks that the 3rd Generation Partnership Project (3GPP) organization has standardized a newer Enhanced Voice Services (EVS) media format that offers full compatibility with existing HD Voice formats while providing an even greater sense of conversation “naturalness.” Accordingly, with time-to-market and innovation as two keys to business success, it is not unreasonable to forecast that cutting-edge LTE service providers will likely deploy this new “being there” EVS voice technology in the not too distant future.
In the immediate term, the global rollout of VoLTE services will force mobile operators to reevaluate their end-to-end connectivity strategies, and to scrutinize the capabilities of their interconnect partners, both nationally and internationally. When HD Voice calls are placed wholly within a single IMS VoLTE network (between two HD Voice capable handsets), both parties on the call experience a “High Definition Voice” conversation. However, if an HD Voice call originates in one VoLTE network and terminates in a different VoLTE network, then whether or not this conversation takes place in HD Voice depends on the capabilities of any associated interconnect network operators, and more specifically, on their ability to support end-to-end HD Voice sessions. For this reason, the coming deployments of HD Voice service by mobile operators will create new interconnect opportunities. By differentiating with end-to-end HD Voice connectivity and transcoding services, interconnect carriers will be able to meet the needs of VoLTE users for both HD Voice connectivity and seamless interworking with disparate user devices such as webRTC soft clients.
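A rough sketch of the interconnect decision described above: if the two legs share a wideband codec, the call can stay HD end to end; if each side is HD-capable but they have no wideband codec in common, the border element must transcode; otherwise the call falls back to narrowband. The codec set and function below are illustrative, not any particular SBC’s actual policy:

```python
# Common wideband ("HD Voice") codecs; illustrative, not exhaustive.
WIDEBAND = {"AMR-WB", "EVS", "G.722", "opus"}

def interconnect_mode(offer_a, offer_b):
    """Decide how an interconnect border element should bridge two
    call legs, given each side's advertised codec list."""
    common_wb = set(offer_a) & set(offer_b) & WIDEBAND
    if common_wb:
        return "hd-passthrough"      # same wideband codec on both ends
    if (set(offer_a) & WIDEBAND) and (set(offer_b) & WIDEBAND):
        return "hd-transcode"        # HD on both legs, SBC transcodes
    return "narrowband-fallback"     # at least one leg is NB-only

print(interconnect_mode(["AMR-WB", "AMR"], ["AMR-WB", "PCMU"]))  # hd-passthrough
print(interconnect_mode(["EVS"], ["AMR-WB", "PCMU"]))            # hd-transcode
print(interconnect_mode(["AMR-WB"], ["PCMU", "PCMA"]))           # narrowband-fallback
```

The middle case is exactly where interconnect carriers can differentiate: without wideband-to-wideband transcoding at the border, an EVS handset calling an AMR-WB handset would drop to narrowband.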
Quoting from a May 2016 i3forum report, “60% of interconnect carriers still have over half of their international interconnect using TDM.” Over the next few years, as more mobile operators require end-to-end connectivity for their VoLTE HD Voice services, interconnect carriers should anticipate decreased demand for lower cost TDM links and increased demand for all-IP end-to-end HD Voice interconnect solutions. The conclusion here is that a compelling growth opportunity exists for agile network operators that support carrier grade IP interconnect solutions and enable HD Voice conversations.
As a key network element providing secure real-time communication sessions and IP-to-IP transcoding at interconnect borders, Session Border Controllers will remain critically important to every network operator’s success, both today and in the coming future. To learn how Dialogic Session Border Controllers bridge the gap between COTS and cloud with both appliance-based and fully virtualized solutions, simply click below and download a Dialogic BorderNet Session Border Controller Solution Brief.
Publish Date: June 6, 2016 5:00 AM
The past few weeks, I have mentioned the February 2016 Cisco VNI report to make some points about WiFi. However, the Cisco VNI report also has some other very interesting information that I wanted to point out in the next few blogs.
Today, I want to make some points about mobile video. As readers of this blog know, I have been very bullish about the potential of mobile video. 3G was the technology that enabled mobile video, but there were clearly limitations (the spinning circle became pretty ubiquitous to those of us that tried mobile video on 3G) and people used it only if they were committed to it. But with WiFi and 4G, bandwidth availability and improved speeds have enabled video to be similar to a wired home experience. And with larger screens, the viewing experience is also better. As such, it’s no surprise that the Cisco VNI reports that mobile video traffic accounted for 55 percent of total mobile data traffic in 2015, and that by 2020 video will account for 75 percent of total mobile data traffic.
What video are people watching? By all accounts, streaming in one form or another, ranging from YouTube and Netflix, accounts for most of the video. And we’re seeing more and more video advertisements as well, which isn’t surprising considering we see advertisements all the time if we go online from our wired home computer. That model is tried and true. And while streaming will ultimately continue to dominate the mobile video space, video value-added services are becoming more and more prevalent. Services like video chatting, video messaging, making video conference calls / having video collaboration calls, and video IVRs are finding their way.
Publish Date: May 31, 2016 5:00 AM
Services, Services, Services… you can practically hear the cry of Mobile Network Operators (MNOs) all over the world pleading for new (or even old) services that can run on their shiny new LTE networks. Of course, this is nothing new, as this repeated cry for services is generated with the roll-out of every new generation of network (e.g. 2G, 3G), because MNOs are well aware that the serious payoff comes primarily from running new services, which in turn can justify their investment.
FYI, in this blog I won’t address what over-the-top (OTT) services are doing to the bottom line of MNOs, which goes without saying is why new services offered by the MNOs are so critical.
One service that is ideal for LTE is Visual IVR, or Visual Interactive Voice Response (different than Video IVR). At this point I know what a lot of you are thinking…IVR is dead, so why resuscitate it for a new network? My short answer is that Visual IVR is not your parent’s IVR.
FYI, in this blog I also won’t address the fact that IVR is not dead, but rather it is one of those unique applications that is continuously morphing itself into new services. For example, Visual IVR extends the capabilities of the IVR by transforming it into a collaborative web-based voice and visual mobile application for smartphones, tablets, and computers.
As the name clearly implies, Visual IVR adds a visual interface to the audio-only IVR by visually representing an IVR menu on the caller’s smartphone or computer. What makes Visual IVR ideal for LTE is the need for speed, since the visual content is web-based (unlike Video IVR, where the content streams together with the audio). In many cases, LTE also provides the added ability to simultaneously manage a voice call and data to a network. In short, Visual IVR enables the caller to make choices both visually and audibly by syncing the audio and visual portions of the call, and LTE’s rollout helps make this happen. In fact, Visual IVR not only visually represents the menu, but also allows for more content, such as documents and visual media, to be pushed out to the caller.
Visual IVR brings with it a lot of benefits…
- Increased selection accuracy lowers average handling times/call duration (visual navigation is faster than listening to audio prompts).
- Intuitive visual navigation improves first-call self-service resolution rates more effectively than speech recognition.
- Information-rich input allows complex alphanumeric data to be collected more easily than with speech recognition.
- Simultaneous interactive two-way voice and data interaction reduces the need for agent involvement.
- Omni-channel experience enables the caller to start a chat or text session, send an e-mail, request a callback, or transfer to an agent.
- Secure communication channel for data exchange to and from the IVR reduces the risk of data theft.
- Sharing visual content during conversations can boost comprehension and recall up to 600% (John Medina, Brain Rules).
As you can see, Visual IVR is an ideal service for MNOs to run on their new LTE networks. It is simple to use yet offers callers great benefits, especially when it comes to Contact Center services such as self-service customer care. That said, as is always the case, the success of any service, including Visual IVR, comes through the MNO’s knowledge of their subscribers and how they best want their information delivered on this new and shiny network.
Publish Date: May 27, 2016 5:00 AM
In my previous blog, I spoke about AT&T’s thought leadership session at the recent ETSI NFV ISG. They explored a wide range of topics, including revenue opportunities from the cloud, NFV, SDN, 5G, and reimagining the central office. You can read that blog by clicking here. While I was expecting more insights into NFV, ongoing proofs of concept, and updates on new SDN-based services, I was pleasantly surprised to hear AT&T’s VP of Brand Identity Gregg Heard give one of the more attention-getting presentations on something you wouldn’t hear every day at a technical specifications conference: the AT&T brand strategy. But when you think about it, this makes a lot of sense. Service providers should be very cognizant of how consumers perceive their brand as they try to evolve from voice, data, and pipes to applications, entertainment, and IoT services like smart home automation.
Clearly there’s a divide between what people think about their mobile carrier and what they think about their super cool smartphone, tablet, or wearable device, so how should a mobile operator position itself to win more than its fair share of recognition with subscribers, alongside the app and device developers? Gregg indicated that AT&T makes sure its brand is at the center of all considerations when presenting the company. This attention to branding is having an impact on everything from vehicles to uniforms, their cool-looking hardhats, and even the legal language used on customer-facing documents. Gregg also talked about the emphasis AT&T is putting on its sonic branding (the second most recognized of AT&T’s brands). But what I thought was missing from the discussion was what AT&T is doing from an employer-branding perspective, especially in light of its massive undertaking to retool its employees for next-generation cloud technologies, NFV, and SDN.
In an earlier post, I talked about the nanodegree program that AT&T has made available to its workforce to bring employees up to speed on the new software-centric technologies. In addition, many tech companies are starting to see that job seekers are also eventual consumers, and if potential candidates deem that a company is not good enough to work for, they sure as hell aren’t good enough to buy from either. Brand identification has a lot to do with treating job candidates as eventual customers. Somehow, these efforts should earn AT&T some improved marks in the area of employer branding.
So in summary, service providers are going through transformational activities internally and externally as they shift their emphasis to a cloud-centric delivery model. Not only is there impact to the People, Processes, and Products of these companies, there’s a fourth “P” undergoing change, and that is the Perception of the brand. To navigate this technology turn, should service providers try to change the way consumers perceive their contribution to the value chain? It definitely wouldn’t hurt. Let us know what you think by tweeting us at @Dialogic. Also, let me know how many acronyms you think I used in these past two blogs – I may send the first person who gets it right a prize.
Publish Date: May 25, 2016 5:00 AM
As many of you know, ITW has historically been about wholesale voice minutes exchange. But as voice minutes exchange has lessened in importance in the industry (due to OTT peer-to-peer offerings such as Skype), this show has become more about what these minutes exchangers should do to grow. As an example, a couple of years ago I created a marketing piece specifically for this show about HD Voice, driven by the coming VoLTE and WebRTC offerings, and about transcoding HD voice to other formats so that HD voice minutes could be part of an offering. People looked at me askance those few years ago. But now everyone knows about HD Voice, so the concept of additional services is clearly taking root.
The i3Forum is looking at these issues as well. Many of its working groups are devoted to various service improvement offerings, with both vendors and service providers participating. The fact that such groups exist at all is a big step toward working out a growth strategy.
During 2016 ITW, NFV also took a more prominent role. One reason, obviously, is that NFV is a very good vehicle for cost reduction. But the service agility it enables, namely the ability to add services quickly, is another great reason it was a topic at ITW.
As always, ITW was a great show for Dialogic because of all the meetings we had, and I look forward to continuing to attend as the whole audience pivots to a service-oriented growth strategy.
Publish Date: May 24, 2016 5:00 AM
AT&T hosted the recent ETSI NFV ISG conference in Atlanta and kicked things off with a thought leadership session that spanned several network transformation topics including cloud-centric revenue opportunities, NFV, SDN, 5G, and branding – yes, branding.
Bala Thekkedath, Director of Marketing, and Dossevi Trenou, Chief Technologist for Hewlett Packard Enterprise, kicked things off with a discussion on something we all like to hear about: new revenue opportunities. The omnipresent OTT threat of course came up, and they suggested that the crowd (made up of service providers and vendors) proceed down the peaceful-coexistence route and focus on the respective core strengths of these seemingly opposed parties. Probably the most interesting aspect of the discussion was the concept of the enterprise as a Virtual Mobile Network Operator, or VMNO, which in the future could be supported by network slicing techniques (keep reading).
Tom Anschutz got the audience up to speed on AT&T’s CORD initiative. CORD is not just any old four-letter word; it’s an acronym for Central Office Re-architected as a Datacenter. CORD, a collaborative effort between AT&T and ON.Lab, combines NFV, SDN, and cloud concepts along with commodity hardware to build out an agile, programmable central office infrastructure designed for rapid deployment of services. Virtualized network functions such as firewalls, parental control applications, and caching, along with OLT, CPE, and broadband network gateways, run on commodity servers managed by an open VNF manager that leverages OpenStack. You can download an informative whitepaper on the architecture here.
Hank Kafka, VP of Access Architecture and Analytics, provided insight into AT&T’s vision for its 5G architecture and direction on virtualization. 5G is not only really fast connectivity (mobile broadband speeds over 56 Gbps) but also improved connection densities, both of which are needed to support the massive numbers of IoT/connected devices and near real-time applications that we know are coming. The low-latency characteristics of 5G are important for real-time remote manipulation of devices, industrial controls, and applications such as auto collision detection. Hank indicated we’ll start to see pre-5G with the coming of the 2018 Korean Winter Olympics. So we’ll get a taste of what’s in store when the time comes for 5G to start rolling out in areas outside Asia Pacific.
But one of the key takeaways I noted was the call for the core network to be reinvented when it comes to 5G. 5G use cases will definitely have an impact on core network design, and while there will basically be the same radio resources, they will be used in different ways. In a 5G world, devices will have a wide range of speed demands, latency requirements, and mobility needs. The “one size fits all” network we have today is not in tune with new device and use-case trends. The networks deployed were originally optimized for voice, but Internet demand has driven new generations of RAN and core technology, and the new array of connected devices for vehicles, wearables, and remote sensors will all have different mobility needs.
This observation was a natural lead-in to the next concept Hank brought up, which was network slicing. You can read about this very cool concept in a blog that I wrote a few weeks back. With network slicing, multiple instances of virtualized network functions could exist, each allocated to a specific network slice. Each slice would be optimized, orchestrated, and functionally equipped for a specific use case, device type, or group of subscribers. This would enable the dynamic, automatic orchestration, addition, or removal of the network functions that provide the services in that slice. One obvious demand going forward is functional richness tailored to the requirements of the various slices. The main point I came away with was Hank’s comment that network slicing is a concept that is only possible with the use of NFV technologies.
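To make the slicing idea concrete, here is a minimal, purely hypothetical sketch of how an orchestrator might match a device’s service requirements to a slice profile. The slice names, metrics, and thresholds below are my own illustrative assumptions, not part of any AT&T or 3GPP specification.

```python
# Illustrative only: slice names, metrics, and thresholds are assumptions.
SLICE_PROFILES = {
    # profile: latency the slice delivers, bandwidth it provides, mobility it supports
    "critical-control": {"latency_ms": 5,   "bandwidth_mbps": 10,   "mobility": {"low", "medium"}},
    "mobile-broadband": {"latency_ms": 50,  "bandwidth_mbps": 1000, "mobility": {"low", "medium", "high"}},
    "massive-iot":      {"latency_ms": 500, "bandwidth_mbps": 1,    "mobility": {"low"}},
}

def select_slice(max_latency_ms, min_bandwidth_mbps, mobility):
    """Return the first slice whose profile satisfies the device's needs."""
    for name, p in SLICE_PROFILES.items():
        if (p["latency_ms"] <= max_latency_ms
                and p["bandwidth_mbps"] >= min_bandwidth_mbps
                and mobility in p["mobility"]):
            return name
    return None  # no suitable slice; fall back to a default bearer

# A connected-car collision system needs single-digit latency:
print(select_slice(max_latency_ms=10, min_bandwidth_mbps=5, mobility="medium"))
# → critical-control
```

In a real network the orchestrator would also spin up or tear down the VNF instances backing each slice, which is exactly why slicing presupposes NFV.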
Alan Blackburn, VP of Architecture and Planning, reiterated AT&T’s goal to virtualize and, more importantly, control 75% of its network using cloud infrastructure and SDN by 2020. The “why” to this “what” was the exploding traffic volumes and the realization that AT&T can’t build networks in the traditional manner anymore, given the sheer tonnage of video traffic and the massive number of IoT sessions it is experiencing. The range of traffic that networks have to carry will be vastly different.
AT&T recently released a white paper on its virtualization and NFV architecture framework that talks about its ECOMP (Enhanced Control, Orchestration, Management and Policy) software platform. ECOMP is one of the three pillars, in addition to NFV and SDN, of its Domain 2.0 (D2) initiative. Together, these three frameworks are expected to enable AT&T to realize improved efficiency, reduced cycle times, and the ability to roll out innovative services at a faster rate.
ECOMP is a critical component in achieving AT&T’s D2 imperatives; it’s basically the brains of the D2 strategy. It provides closed-loop automation and service instantiation to help rapidly on-board new services created either by AT&T or third-party providers. While it’s designed to help reduce CAPEX and OPEX, D2 is a transformative initiative that will enable AT&T network services and infrastructure to be used, provisioned, and orchestrated in the manner typical of cloud services in data centers. The challenge with a framework like Domain 2.0 is that there naturally has to be a “3.0” version, but AT&T is already starting to think about what that will look like.
All in all, the AT&T thought leadership session was a whirlwind that covered NFV, SDN, 5G, revenue opportunities, the “reimagining of the central office,” and more, along with a healthy dose of acronyms. But what about branding? How does a company’s branding activity intersect with all the cloud technology initiatives discussed? That was one of the more interesting topics, which I’ll talk about in my next blog. Stay tuned!
Publish Date: May 20, 2016 5:00 AM
These last few months we’ve increasingly found ourselves talking with customers about scaling application deployments. A pair of issues comes up over and over again: scaling and reliability. Once the proof-of-concept version of an application is done, it’s time to start thinking about how to deploy it at large scale. Like constructing a large office building, it all starts with a solid foundation. Not just any foundation, but one designed for the specific structure it will eventually support.
Let’s step back and look at the challenge.
To properly scale an application, you have to assume that no one server can support all the traffic and computational resources needed to meet large customer demand. Spreading the effort over multiple servers also gives a reliability gain. But users don’t want or need to keep track of multiple server addresses – they want one URL to access the application, even though that request will be handled by any one of a number of servers. In the web world, this is accomplished with an Application Delivery Controller (ADC), essentially a load balancer providing a front-end to an array of application servers that serve web pages to users.
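The core distribution mechanism an ADC performs can be sketched in a few lines: one public address in front, requests fanned out across a pool of backends. This is a minimal round-robin illustration, with placeholder server names, not a description of any particular product.

```python
import itertools

class RoundRobinBalancer:
    """Minimal sketch: hand each incoming request to the next server in the pool."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)  # endless rotation over the pool

    def pick(self):
        """Return the backend that should handle the next request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["app1.example.com", "app2.example.com", "app3.example.com"])
print([lb.pick() for _ in range(4)])
# → ['app1.example.com', 'app2.example.com', 'app3.example.com', 'app1.example.com']
```

A production balancer layers health checks, weighting, and failover on top of this loop, but the user-facing contract is the same: one address in, many servers behind it.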
As communications applications have moved to web-based technologies, they too can use a similar architecture: a common point of entry that distributes workload across an array of application servers. However, real-time communications applications have a number of unique needs that are often overlooked, making ADCs a poor fit for the job at hand and spawning the need for a purpose-built load balancer for real-time communications:
Latency Sensitivity – voice and video conversations and protocols are far more sensitive to delays that would go unnoticed during a web session.
Carrier-Grade Reliability – with real-time communications being adopted in Emergency Services and other critical infrastructure applications, reliability is a far greater issue than for an e-commerce site.
Service Affinity – collaboration applications benefit tremendously from bringing all the parties together on a single application server. Doing so requires that session routing be able to bring multiple users together.
Cloud-Ready Software – with many communications applications destined for the cloud in a software-only deployment model, the associated load balancing function must also be a software-only solution that can be deployed in a virtual environment.
Deployment Simplicity – SIP, WebRTC and other protocols are the staple of communications applications. Support for these protocols should be native.
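Service affinity in particular is worth a sketch: route every participant who shares a session key (say, a conference room ID) to the same application server. The example below uses plain hash-modulo mapping for clarity; a production balancer would typically use consistent hashing so the mapping survives pool changes. All names here are illustrative assumptions.

```python
import hashlib

# Placeholder pool; a real deployment would discover these dynamically.
SERVERS = ["media1.example.net", "media2.example.net", "media3.example.net"]

def server_for(session_key: str) -> str:
    """Deterministically map a shared session key to one server in the pool,
    so every participant in the same session lands on the same server."""
    digest = hashlib.sha256(session_key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SERVERS)
    return SERVERS[index]

# Every caller joining conference "room-42" gets routed identically:
assert server_for("room-42") == server_for("room-42")
```

For SIP traffic the natural affinity key is often the Call-ID or a conference URI; the point is that the balancer routes on session identity, not just on connection load.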
Next week, James Rafferty, Product Line Manager at Dialogic, and I will host a webinar titled “Scaling Real-Time Communications Applications with Load Balancers”. During the event we’ll explore these unique requirements, discuss techniques to address them, and provide an overview of the Dialogic PowerVille LB – Load Balancer for Real-Time Communications. We’d love to have you join us for the live event on Thursday, May 26th at 2:00 PM ET for a deep dive into the world of large-scale application deployments with load balancers. Register Now.
Publish Date: May 19, 2016 5:00 AM