Dialogic - ContactCenterWorld.com Blog Page 4
It is pretty obvious these days that telecommunications plays a much bigger and more vital role in both our professional and personal lives than it did in the past. One doesn’t need to be an analyst to draw this conclusion; the empirical evidence is clear in the mobile devices (e.g. phones, tablets) nearly everyone carries, often more than one. Add to this the continually growing reliability, security, and cost-efficiency of the hosted (cloud) model, and you have enterprise and residential services that can’t be ignored.
One such service is the Hosted IP PBX for the enterprise, which is the topic of this blog and the first in my three-blog series I am calling Telecommunications in the Cloud.
The Enterprise: To buy or to lease? On-premises or in the cloud? These two questions are being asked more and more every day by enterprises of all sizes when it comes to their telecommunications needs. Both options have their pros and cons, the two most obvious considerations being control and cost. Large enterprises typically like to be in control of their telecommunications, which is why many, if not most, keep it in-house. On the other hand, cost is a top concern for small and medium-sized enterprises, which is where Hosted IP PBX finds its sweet spot.
The Network Operator: The growth and acceptance of cloud-based services combined with their existing infrastructure puts network operators in an ideal position to offer Communications-as-a-Service (CaaS) to their enterprise customers. One such CaaS is the Hosted IP PBX, which enables network operators to become “relevant” again in the lucrative enterprise market. NOTE: CaaS, such as Hosted IP PBX, can also be offered to enterprises by telecommunications service providers who are not network operators.
Knowing that Hosted IP PBX is a win-win for both network operators and enterprises, let’s look at what Hosted IP PBX has to offer. In short, Hosted IP PBX is a business-class phone service provided over the internet allowing small and medium-sized businesses to have a sophisticated telephone system without the CAPEX investment in telephone equipment. In fact, the entire telephone system is hosted (operated and maintained) at an off-site location by the network operator.
Hosted IP PBX also enhances the traditional telecom architecture by adding a self-service component for the enterprise subscriber. User configuration and control are commonly provided through a secure, standard web-based portal that enhances basic phone functionality, making standard services like multi-device ringing, conference calling, call move, and call forwarding easier to manage and new services simpler to deploy. For example, with a simple click on a web page, a subscriber can choose to forward calls to a mobile phone or have them ring on all of their assigned numbers.
Which small or medium-sized business wouldn’t want to off-load the operation and maintenance of its telecommunications system and gain the following benefits, among others?
- Personnel Savings – with employee salaries typically making up the largest portion of a business’s budget, eliminating the need for in-house telecommunications/IT staff to manage the system, address problems, perform upgrades, etc. is the first of many benefits of Hosted IP PBX.
- Total Cost of Ownership – not having to make an initial large upfront investment purchasing an office telephone system (shifting CAPEX to OPEX) allows that money to be used more strategically. Furthermore, on-going savings include low-cost inter-office/long-distance calling, and eliminating system maintenance and upgrades, with the added benefits of a predictable monthly cost and simplified vendor management for multiple services.
- Scalability – all businesses have spikes in their capacity demands, but not all have the luxury of meeting them - unless they have Hosted IP PBX, which lets them quickly and easily expand and grow without adding costly hardware or enduring time-consuming installations.
- Flexibility – employees can work from just about anywhere with an IP connection - home, hotel, airport - or from a mobile phone, while keeping features such as call transfer, auto attendant, music-on-hold, and conference calling, and managing their own profiles through an on-line portal, leading to enhanced productivity.
- Business Continuity – Hosted IP PBX service providers offer disaster recovery options: they have multiple carrier-grade hosting centers with fully redundant servers, and in case of an emergency such as a fire, flood, or power outage they can quickly and easily route calls to alternative locations or mobile phones, keeping business running as usual.
With most network operators looking for ways to increase revenue and not be diminished to the passive role of providing only dumb pipes, CaaS is a clear opportunity. Enterprises are always looking for ways to lower costs while maintaining (or even increasing) their level of service, and Hosted IP PBX is one service that can accomplish both.
In part 2 of 3 of my Telecommunications in the Cloud series, I will discuss Class 5/residential services as another service network operators could consider.
Publish Date: September 2, 2016 5:00 AM
While OPNFV and Openstack provide a convenient virtualized environment for deploying and running network-oriented applications, there is a whole other dimension to what can be done with them. With conventional computing, you need to wheel in another box, install things, and then modify your networked environment to take the new hardware into account. With virtualization, you avoid dealing with hardware each time you need to increase an application’s capacity: you can “spin up” additional virtual machines and then configure the environment accordingly. And, in a well-designed OPNFV environment, all of this can be done automatically.
While OPNFV doesn’t come with a magic “gimme more” button, the components are there to put together such a button yourself. Here’s what’s involved:
- The Openstack Heat project. This “implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code.” This means that you can define exactly how you want to run your networked application – size of the instance(s) used, which application image(s) you want, networks and ports that need to be created to tie things together, static vs. DHCP IP addresses, and application-specific configuration. Like other Openstack components, you have a choice of invoking Heat templates through the Horizon GUI or through a CLI. While not a true programming language, Heat’s YAML format allows for a fair amount of flexibility, which makes things readable and maintainable.
- Telemetry. OPNFV comes with a telemetry node based on the Ceilometer project. By default, this node collects Openstack performance data and stores it in a MongoDB database. It may also be used as a convenient place to store application performance data.
- Application-specific Key Performance Indicators (KPIs). These are statistics that pertain to the application itself, rather than the platform it runs on. This could be things like the number of simultaneous users logged in, the number of licenses in use vs. the number of licenses available on the node, or the number of people using some other limited application resource. By monitoring these sorts of values, it can become apparent that additional application resources are needed, and that new VMs must be started and connected. And, the opposite can happen when resources are no longer needed.
- Heat Orchestration Templates (HOT). This might be considered the first level of automatic scaling. The infrastructure to direct Openstack on how to add additional VNF components is defined by Heat. HOT has rudimentary abilities that allow some system-oriented performance indicators to be monitored. This would include CPU and disk usage and network performance indicators. Additional VNF components are then started when the need arises, and torn down when they are no longer needed.
- Full Management and Orchestration (MANO). Not really a part of OPNFV yet, but sure to be in the future. There are a variety of MANO products and projects out there in their formative stages, including the Openstack Tacker project, Open Source MANO/Rift.io and Open Baton.
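To make the Heat item above more concrete, here is a minimal HOT template sketch that boots a single VM on an existing network. The image, flavor, and network names are placeholder assumptions, not values any particular OPNFV deployment provides:

```yaml
heat_template_version: 2015-04-30

description: Minimal sketch - boot one VM on a pre-existing network

parameters:
  image_name:
    type: string
    default: my-vnf-image     # placeholder; use an image loaded into Glance
  flavor_name:
    type: string
    default: m1.small

resources:
  vnf_port:
    type: OS::Neutron::Port
    properties:
      network: private        # assumes a network named "private" exists

  vnf_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }
      networks:
        - port: { get_resource: vnf_port }
```

A template like this can be launched from the Horizon GUI or the Heat CLI, and scaled-out variants layer OS::Heat::AutoScalingGroup and alarm resources on top of the same basic structure.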
While I’ve been trying to avoid mentioning too much that is specific to our media server VNF, it makes a useful example of some things that have to be taken into account when scaling applications. Let’s look at video conferencing. A single VM can handle only a finite number of caller sessions, and callers are divided up into specific, discrete conferences. What happens if we are running out of room and need to expand? We can’t just allow more callers into an already jammed conference, or put them in a new conference that they aren’t supposed to be in.
Well, there is another node in our VNF called a Media Resource Broker (MRB). This is where the intelligence lives that keeps track of the multiple media servers and their capabilities - things like the codecs and resolutions available. Knowing what sort of conferencing facilities are available, it is able to quickly move conferences from an almost “full” server to one with spare capacity. All of this can happen when a new caller arrives and puts a media server over the edge.
But, one thing it can’t do is start up additional media servers. It can only deal with existing servers that it already knows about. That’s where OPNFV management and orchestration come into play. When a threshold (as defined by an application KPI) is exceeded, a new media server is started. As part of its startup and configuration, it registers itself with an MRB so that the MRB becomes aware of its additional resources, and can adjust the conferences it manages accordingly.
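The scale-out loop just described - a KPI threshold is exceeded, orchestration starts a new media server, and the server registers itself with the MRB - can be sketched in a few lines of Python. This is an illustrative toy, not Dialogic’s actual MRB or orchestration API; the class names, the 80% threshold, and the capacity numbers are all assumptions:

```python
# Toy sketch of KPI-driven scale-out: a broker tracks media servers and
# their session load; when aggregate utilization crosses a threshold,
# an orchestrator "starts" a new server, which registers with the broker.

class MediaServer:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # max simultaneous caller sessions
        self.sessions = 0

class ResourceBroker:
    SCALE_OUT_THRESHOLD = 0.8      # assumed KPI: 80% of total capacity

    def __init__(self):
        self.servers = []

    def register(self, server):
        # A newly started media server announces itself to the broker.
        self.servers.append(server)

    def utilization(self):
        total = sum(s.capacity for s in self.servers)
        used = sum(s.sessions for s in self.servers)
        return used / total if total else 1.0

    def admit_caller(self, orchestrator):
        # Place the caller on the least-loaded server.
        server = min(self.servers, key=lambda s: s.sessions / s.capacity)
        server.sessions += 1
        # KPI check: ask the orchestrator for more capacity if needed.
        if self.utilization() >= self.SCALE_OUT_THRESHOLD:
            orchestrator.scale_out(self)

class Orchestrator:
    # Stands in for Heat/MANO launching a VM from a template.
    def __init__(self):
        self.spawned = 0

    def scale_out(self, broker):
        self.spawned += 1
        broker.register(MediaServer(f"ms-{self.spawned}", capacity=10))

broker = ResourceBroker()
broker.register(MediaServer("ms-0", capacity=10))
orch = Orchestrator()
for _ in range(9):
    broker.admit_caller(orch)
print(len(broker.servers))  # -> 2 (a second server started at 80% load)
```

In a real deployment the Orchestrator role would be played by Heat or a MANO component launching a VM from a template, and registration would happen as part of the new media server’s startup configuration.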
Now, your application may well work differently, and may require different KPIs and scaling schemes. But, the principles will be the same, and it’s likely that some application involvement will be needed.
This concludes my series of OPNFV blogs for now. But, more will be sure to follow. We might want to take a deeper look into Heat templates and MANO, and there will certainly be things to say about our proposed OPNFV-based, company-wide QA and test environment that is just getting off the ground. And I’m sure there will be other topics I haven’t even thought of yet.
Thanks for reading!
Publish Date: August 25, 2016 5:00 AM
Chetan Sharma has been discoursing on what he calls the “4th wave of mobile communications” for some time. And I’ve commented on some of this from time to time. He recently put out an update of the 4th wave paper called the “4th Wave Index: Benchmarking the Growth and Evolution of the Mobile Ecosystem.” It’s an excellent paper. On the last page, there is a bold subhead that totally caught my eye. It said “Voice and Messaging Revenue line items will disappear from operating financials in the next 5-10 years.”
First of all, it does not mean that there won’t be any voice or any messaging. Obviously, there will be voice minutes, and there will be messaging such as text messages, and there will be integrated social media type of communications, etc. But it does mean a few things:
1. Erosion of voice and messaging revenue by apps that run over the data networks will obviate any need to continually point out this negative trend. When the revenue gets small enough, there is no need to point it out anymore.
2. Social networking provides different kinds of communications. It could incorporate voice; it could incorporate text. The two are more intertwined now, not so disparate.
3. The mobile service providers will be moving to what Chetan calls the “4th wave of mobile revenues” and will report on that.
4. The move to IP means that everything is data. Voice is a type of data. Text is a type of data. Video streaming is a type of data. Different metrics will be used to measure success.
The 4th wave is extremely exciting not only for mobile service providers, but also for application developers such as Dialogic. There are opportunities to add value in very many different ways. One such way I have written about recently is real-time communications and value-added services that go with them.
Publish Date: August 23, 2016 5:00 AM
Robocalls have been getting quite a bit of ink lately in the United States. Robocalls are those annoying auto-dialer calls you may get. The US FCC has stepped in and asked the US service providers to provide robocall blocking services. In many ways, this really is nothing new. Robocalling has been a problem for years. I had put my name on a do not call list many many years ago with great success (Go to www.donotcall.gov). However, that was my home phone. They are now coming to my mobile phone, so I’m going to have to register that phone number.
Tricking callers by displaying fake caller IDs is easier now than it’s ever been, which is one of the reasons why this issue has come up again. If you go to Google and type “robocall” the first things you see are 4 sponsored ads that enable you to send robocalls!
However, it is illegal. The FTC’s website says “If you receive a robocall trying to sell you something (and you haven’t given the caller your written permission), it’s an illegal call. You should hang up. Then, file a complaint with the FTC and the National Do Not Call Registry.”
Note you can still get phone calls from “existing relationships.” For example, I periodically get automated, sometimes even somewhat personalized, phone calls from the New York Giants or members of the New York Giants as I have season tickets with them. And I don’t think it’s possible to escape the political robocalls if you are registered with a party, though especially this year, I wish I could.
What is new is that the FCC has asked the service providers to provide call blocking services, and not leave it up to the consumer to do all this work. There are multiple solutions to this issue at the network level, one of which is putting a call blocking application with the Class 4 switch. And the Dialogic ControlSwitch can help. To find out more, contact us here.
Publish Date: August 16, 2016 5:00 AM
I live in a small town outside New York City in a very densely populated area. However, ever since I’ve had a mobile phone, I have had inconsistent service from my wireless provider – Verizon Wireless. When I make phone calls from home, I often have to stand out in the back yard and find just the right location to get decent service. And forget about LTE service.
I know for sure that Verizon Wireless is trying to rectify the service problems in our town by applying to install a network of antennae. However, they are being delayed by town politics and townspeople worrying about radiation issues and aesthetics.
Most of us in the town have learned to live with this poor level of service, and no one is happy. When we get together with neighbors, the discussion always gets around to bad mobile service. That is, who is providing the least worst level of service: Verizon, AT&T, Sprint, or T-Mobile. These discussions have been going on for years. Everyone is willing to switch to a new provider once a good solution is found.
In this environment, I decided to configure my iPhone for Voice over WiFi service. My carrier, Verizon Wireless, enabled Voice over WiFi in the latest release for my iPhone. I was hoping that the voice quality would be better and that there would be no more dropped calls.
I’ve been very happy. The voice quality has been consistently good and I’m having no dropped calls.
The mobile carriers are making big investments in a range of mobile technology and solutions to keep up with surge in usage. See Jim Machi’s blog with commentary on the latest Cisco VNI forecast.
I’ve always been loyal to Verizon Wireless, and have had to settle for inconsistent service when home. However, the new Voice over WiFi capability is something I’m very impressed with. I think I’ll be with Verizon for a while now.
The neighbors and I will have to find something else to complain about around town now.
Publish Date: August 15, 2016 5:00 AM
It’s been quite a while since I wrote about HD voice. When HD voice was first coming to the market 7 or 8 years ago, Dialogic was pretty active in marketing its importance and talking about it. We knew that HD voice would be used when VoLTE was implemented. And we saw many cases where Value Added Services needed to be upgraded to ensure that HD voice was carried over from the phone call to the Value Added Service (for instance, voice mail) in question. It’s taken longer than many of us thought.
However, it’s here and fairly ubiquitous now. Apple supports it in their new phones, WebRTC has the capability built in, and HD voice codecs are used in OTT offerings such as Google Voice and Skype. I am finally taking calls in HD voice. I know this simply because I can hear the difference.
It’s interesting how the marketing of HD voice is working with the wireless service providers. In the US at least, service providers are actively marketing it as a benefit of VoLTE. AT&T makes a big deal about it and so does Verizon. Sprint, somewhat less, but it’s on their website if you dig a little.
This may be the last blog I write about HD voice, simply because it’s now integrated into many different offerings. I would expect the marketing of HD voice to start to subside now – it’s not really a differentiator anymore, so why market it when it’s not differentiated? I’m sure there will be a “next thing” to talk about with voice. Apparently, Enhanced Voice Services (EVS) is a big deal. Whatever the “next thing” is, I’ll be writing about it.
Publish Date: August 9, 2016 5:00 AM
In the beginning of June, I did a talk at Bynet Expo titled “Powering the Agile Operator.” Part of the talk was about how NFV could help power the agile operator. As most readers know, NFV promises lower CAPEX, lower OPEX, and faster rollout of services. In other words, agility. As such, NFV is a key component of the “agile” strategy.
But why and how? To me, NFV is about a service provider obtaining best-of-breed VNFs (Virtual Network Functions - basically, software-based functionality that the network requires to do its job) for what it is trying to do, and putting them together in its network. It borrows concepts from the IT domain, where you pick the best components and have them all work together. This best-of-breed software approach is new for service providers and will be a game changer in the ability to roll out services. Lower CAPEX and lower OPEX are certainly byproducts of that.
As such, the service provider is not tied to a single supplier, which means it is not tied to that supplier’s schedule or at the mercy of its fees for special upgrades. The service provider will have software-based choices and can put together the network it wants from the VNFs. Of course, there is work to do regarding interoperability, but that will come.
The fact that software can work on existing hardware, software VNFs can be spun up and down as network needs dictate, and that this software can be developed independently from the hardware is a huge step forward in network agility.
Publish Date: August 2, 2016 5:00 AM
Network Functions Virtualization is more than just moving network infrastructure functionality to a virtualized environment. It’s about orchestrating, delivering, and managing end-to-end services by chaining together software-based network functions, and doing this in an automated and programmatic fashion.
The NFV list of benefits definitely includes lower CAPEX and OPEX, but more than that, NFV allows operators to be more flexible. It does this by providing a framework for operators to build networks and roll out services at software speed. It also allows them to automate many tasks that, in a traditional network architecture with elements running on proprietary hardware, were cost-prohibitive or simply not feasible.
This infographic highlights some of the areas within Cloud/NFV environments where automation can play a role. For example:
- Onboarding a Virtualized Network Function (VNF) into a data center cloud
- Scaling an application by automatically reserving, configuring, and turning up virtual compute and storage capacity as well as loading additional network function resources into those virtual machines
So take a look at the ways NFV is defining standards for automating network infrastructure within data center cloud environments.
Download the PDF Version
Publish Date: August 1, 2016 5:00 AM
With legacy circuit-switched equipment rapidly reaching retirement age, telecom service providers around the globe are focused on evolving to all-IP networks utilizing SIP and Diameter signaling, and additionally, on migrating to virtualized and cloud-based platforms. With compelling advantages that range from opex and capex cost savings to multimedia applications and service portability, all-IP Next Gen and IMS networks certainly do represent a big leap forward from the installed base of circuit-switched networks that rely on Signaling System 7 (SS7) protocols and Signaling Transfer Point (STP) switches for end-to-end connectivity.
However, despite the many compelling advantages of all-IP broadband networks, it would be premature to declare SS7 and STPs “dead”. In fact, SS7 signaling is still very much alive and kicking. So the question is: why does SS7 live on and remain relevant today? Well, one reason for SS7’s continued vitality is that it serves as a central nervous system for the PSTN and PLMN, and can therefore be relied upon to guarantee global connectivity for both voice and SMS services as the world gradually transitions to all-IP networks over the coming decade. A second reason for SS7’s ongoing relevance can be attributed to the success of mobile Short Messaging Service (SMS) and the fact that it continues to live on despite fierce competition from OTT Instant Messaging alternatives. In fact, SMS text messaging usage has actually been aided in recent years by its integration into compelling new applications such as Visual IVR, mobile marketing, and mobile payments. Clearly, with demand for these sorts of applications on the rise, network operators will want to ensure that their SS7 and STP infrastructures remain fully supported well into the foreseeable future.
While the SS7 protocol itself is unquestionably robust and readily able to address most of today's networking challenges, in too many cases the same cannot be said about the associated STP switching infrastructure, much of which tends to be extremely dated and deployed on proprietary TDM hardware platforms. Unavoidably, end-of-life STP switches supporting E1 and T1 spans today will likely require appliance-based replacement solutions; however, there are also STPs dedicated to supporting SS7 over IP-based facilities (SS7 over IP is called SIGTRAN), and these particular STPs could be candidates for virtualized STP solutions that offer opex and capex savings along with improved service agility.
For operators around the globe, legacy SS7 services are neither dying out nor being rendered defunct by IP messaging applications. Moreover, with the number of global STP suppliers now dwindling, the question being pondered by many network operators is not “when will SS7 be dead?”, but rather, “when should I replace my unsupported STPs?”
With a focus on solving today’s challenges, Dialogic offers a suite of both virtualized and appliance-based signaling solutions. If an STP replacement might be in your future, we invite you to learn more by downloading the Dialogic DSI STP datasheet.
Publish Date: July 29, 2016 5:00 AM
You may not have, as I did, the luxury of choosing the array of hardware – rack-mount servers (RMS) or blade servers – to be used for your OPNFV implementation. I’m going to assume you’re not rolling in a rack full of blades, fiber switch fabric, and storage arrays, but have to make do with what you hope are adequate 1U or 2U machines. I’ll mention later some of the interesting twists encountered with a blade setup, but let’s start with RMSs in this blog.
First, how many do you really need for a minimal but usable deployment? Yes, it is possible to do everything on a single large system by putting the various OpenStack nodes on virtual machines (often Oracle VBox). But, this leads to extra network configuration as what are normally physical NICs will now be virtual NICs. And, worse yet, the machine that will run your guest VMs will itself be a virtual machine. This added layer of virtualization will not be a pleasant thing.
In my mind, a set of 4 bare metal systems is needed for a minimal OPNFV deployment. This will give you a Fuel master node to deploy from and a place to park all of the necessary roles required for OPNFV. What you will lose is some speed because you have jammed several functions on the same system, and you will sacrifice redundancy/high availability. In addition, this configuration only provides a single compute node. So, this node should at least be a sizable system. My compute has 8 CPUs and 32 GB of memory – enough to start up at least a handful of guest VMs.
Let’s look at the overall picture of the deployment. Note that this shows only the RMS side of things, although the virtual Fuel machine is there.
Here are some important points:
- Machines need a minimum of 2 NICs. The NICs (or at least one NIC) must support VLANs.
- 1000BASE-T NICs are the absolute minimum. 10 Gb/s NICs would not be overkill! Our blade server deployment uses 10 Gb/s and is noticeably faster.
- All internal OpenStack networks coexist on a single physical network by means of VLANs. The external/public network is also on a VLAN.
- The PXE boot network must be a flat (non-VLAN) network.
- Gateways in the ToR switch route traffic to other machines in the development lab environment, general corporate network, and public internet.
- Corporate and lab DHCP must be segregated from OPNFV/Fuel/OpenStack. DHCP servers are automatically set up there as needed. If there are conflicting DHCP servers, you will be in trouble.
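To make the VLAN points above concrete, here is a sketch of how the two NICs on a node might be laid out, expressed as Debian/Ubuntu-style interface definitions. The interface names and VLAN IDs are assumptions for illustration only; in practice, Fuel generates the node network configuration itself:

```
# /etc/network/interfaces fragment (illustrative only)

# First NIC: flat, untagged network used for PXE booting and Fuel admin
auto eth0
iface eth0 inet dhcp

# Second NIC: carries the internal OpenStack networks as tagged VLANs
auto eth1.101                 # e.g., management network on VLAN 101
iface eth1.101 inet manual
    vlan-raw-device eth1

auto eth1.102                 # e.g., storage network on VLAN 102
iface eth1.102 inet manual
    vlan-raw-device eth1

auto eth1.103                 # e.g., external/public network on VLAN 103
iface eth1.103 inet manual
    vlan-raw-device eth1
```

The key point is the split: everything on eth0 stays flat so PXE works, while everything on eth1 is tagged so the OpenStack networks can coexist on one physical link.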
If you are not huddled next to your stack of systems, you will likely wish you had viable remote consoles. Unless remote management capabilities are built into the server, this would be through an Ethernet-enabled KVM switch. You will probably need to change BIOS settings, check the progress of PXE boots and get to the systems when networked ssh is not functioning for reasons unknown.
Let’s move on to the Fuel setup. As a quick refresher, Fuel is an installer for your OpenStack/OPNFV environment that provides a GUI to lead you through the OpenStack configuration (If you need a refresher, here’s the link to an earlier blog). As I have mentioned, the OPNFV Fuel Installation Guide is quite comprehensive, laid out in cookbook fashion so that it can be followed from start to finish. So, nothing will be gained by rehashing it. What I will do is point out some “gotchas” that got me in the hope that they won’t get you.
Fuel setup tips:
- The Fuel menu is kind of fidgety, especially with a remote console. Make sure the console is big enough, or strange behavior will occur: the status/error line can be lost, labels can be truncated, and tabbing between fields may not work correctly. Make sure you have a full 80-character width and that you can see the full menu.
- Most screens in Fuel have a “verification” function. Always give this a try before saving the values for the screen.
- Remember that there is a running system behind the Fuel menu. You can drop into it to verify things on the underlying system (most importantly, network connectivity) using “Shell Login.”
- The Fuel menu can be run after the installation to easily check on Fuel settings – “fuelmenu” will start it up. When you are done, choose Quit Setup and quit without saving so that it will not try to make any changes.
- I had trouble getting to the outside world for Ubuntu and OpenStack repositories and thus passing network verification. I set “Skip building bootstrap image” in the Bootstrap Image screen and set up the repositories later, before deploying.
- While it’s possible to set the default gateway on the screen for either network, you will likely only set it as part of the non-PXE network.
- Fuel on a virtual machine – worth the effort if you are using multiple deployments. While multiple environments within a single Fuel are possible, separate Fuels are more flexible, especially in a “mixed” (Blade and RMS) environment. “Bridged” networking devices should be used, as they explicitly map a VM’s virtual network device to a physical device on the host. This is important, as one of the physical networks is flat and used for PXE booting and the other is divided up into VLANs. Also, check closely to make sure the physical device you expect is connected to the bridge device you expect. Ethernet naming conventions can be confusing.
- Be patient once you start the Fuel setup. It can take a while. When finished - if you can get to the Fuel GUI - all is well.
- When booting the OpenStack nodes to make them available for Fuel, it is helpful to watch what happens on the console. If your PXE boot is not working, you may need to drop back into “legacy” boot mode instead of UEFI. I found I had to do that. This will depend on your servers. When the PXE boot works properly, OpenStack nodes will magically appear as ready-to-deploy in Fuel.
- If “Skip building bootstrap image” was chosen, you will receive a suggestion to set up a local repository on the newly-installed fuel node, using “fuel-createmirror.” Follow this suggestion, as this will replace the repositories you were not able to install as part of the Fuel node installation. This will conveniently update all of the URLs in Settings/General/Repositories screen.
As with the Fuel installation, the OPNFV Fuel Installation Guide does a good job of laying out the steps of setting up for and doing the OpenStack deployment using the Fuel GUI. However, there are some traps you can possibly fall into along the way. My next blog entry on my OPNFV journey continues with exposing some of those traps as well as highlighting some additional points to think about in deploying your OPNFV platform.
Publish Date: July 28, 2016 5:00 AM
This infographic is your GPS to help you map out the route to an evolved IPX on which to offer revenue-ready and customized voice and video applications along with advanced interconnection capabilities. Wholesale carriers and IPX operators can leverage their position as a trusted intermediary with MNOs and fixed line operators to provide secure any-to-any mediation and transcoding services for enabling VoLTE/IMS connectivity.
Download the PDF Version
Publish Date: July 28, 2016 5:00 AM
In an IoT world, you’ll have a connected car. Your connected car has all kinds of sensors to detect when to brake, how your car is performing, etc., and a connected speaker that will communicate with you. If your car stops suddenly and no brakes were applied, it might mean you got into a crash. If you have the right app, you might find your car talking to you to see if you are OK. The voice talking to you will most likely be your vehicle assistance company, or perhaps an emergency services drone hovering above, connected to your car’s speaker. All the wearables you have on you will also be measuring and probing your health. Something might indicate to the right application that you have some kind of problem, and if it did, you might also hear someone asking if you need assistance. Video would be involved as well, because cameras will be everywhere. This is a more complicated example, but IoT will be used for very simple functions as well.
If a sensor picked up some anomaly, such as low water pressure or an isolated high temperature, someone could look at a camera first to see if there is an actual issue. This would all save time and expense. These are some examples I can think of for the marriage of IoT and voice communications, but I’m sure there are literally thousands, if not hundreds of thousands, more use cases involving voice and video.
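The sensor-then-camera-then-call flow described above is really a small escalation policy: try the cheap check first, and bring in real-time communications only when warranted. Here is a toy sketch of that idea; the thresholds, field names, and step names are all invented for illustration.

```python
# A toy escalation policy for the sensor example above.
# Thresholds, field names, and step names are invented for illustration.
def escalation_steps(reading: dict) -> list:
    """Given a sensor reading, decide which actions to take, cheapest first."""
    steps = []
    if reading["water_pressure_psi"] < 30 or reading["temperature_c"] > 60:
        steps.append("review camera feed")        # a human look is cheap
        steps.append("open voice/video session")  # escalate to real-time comms
    return steps

# Low water pressure triggers the camera check and then a call:
print(escalation_steps({"water_pressure_psi": 25, "temperature_c": 21}))
```

In a real deployment the second step is where WebRTC or a media server would come in, but the point of the sketch is the ordering: sensors filter, cameras confirm, and voice/video close the loop.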
These examples show that the communications industry can and will play in the IoT market through the marriage of real-time communications and IoT. It will just be different from what built all of our companies today. The main app will not be person-to-person communication as we are used to. The value of IoT to our companies is tremendous, and because of IoT, the voice/video/messaging part we know well will become part of a larger application story.
Publish Date: July 19, 2016 5:00 AM
Before we can look at the components of OPNFV, let’s start with a definition from my last blog: “OPNFV is basically an OpenStack deployment framework, with emphasis on the networking side.” This means that we have at least three pieces here – OpenStack itself, a way to deploy it, and some networking “extras.” That might lead you to ask, “Well, why don’t I just use OpenStack, if that’s at the core of things?”
You could, but remember my warning about re-inventing the wheel. Wouldn’t it be way better to get everything you need to deploy an Openstack cloud in a single package, with good instructions, and a community willing to answer the inevitable questions? Oh, yes it would…
Let’s look at what we will be working with. For those who are not used to dealing with the innards of a cloud, it will help to think in terms of “layers.” Some of these layers are real, and some are virtual. Some allow you to install, deploy, and administer Openstack, and some do the actual work that everything else is there to support – running your application in a virtualized networking environment.
It’s probably best to start by reviewing the layers, at a high level, in the order in which you will be dealing with them. Here they are:
The Openstack deployment controller or master node - It is from here that all good things will spring. Once installed, the master will control how your Openstack environment will both be initially configured, and, how it will be enhanced as time goes on. As more users are added, they will need more image storage and more compute nodes on which to do their work. A deployment controller will allow these additions to be easily done.
The Openstack nodes themselves - There are many different types of nodes that can come into play in Openstack, but not all are needed in every deployment. With OPNFV, there are a set of core nodes that are needed, and a few others that may be deployed if desired. Narrowing down the possibilities helps to alleviate the information overload that comes with figuring out how you want to use something new and complex.
The guest virtual machines running in the Openstack environment. These would be the reason for doing all this in the first place – a set of virtual machines that may be assigned to different users, brought up and down at will, and put into a multitude of virtual network configurations.
Now we can get into specifics. In doing so, I will be telling you what worked for me, and why I made the choices I did. They may not be exactly what you want or need. But, hey, it’s my blog…
FUEL – My Deployment Environment of Choice
There are no fewer than four different installers available with OPNFV. After some experimenting, I chose Fuel. Here are my reasons:
- It has its own semi-automated setup and install program. And, yes, an installer for an installer is not a bad idea.
- Fuel itself uses a well-thought-out web GUI to lead you through OpenStack configuration.
- The Fuel installer and web GUI both allow you to test/verify your setup choices at many points throughout the entire process. This is essential. A mistyped IP address or other mistake can lead to cascading errors a step or two down the road, and you will have no idea what you did or when you did it - unless you are particularly adept at trawling through dozens of strange log files for oddly named things whose function is a complete mystery.
- Fuel supports virtual LANs (VLANs). An OpenStack deployment uses many networks, which may be either physical or virtual. You may have an unlimited budget for network cards for your conventional systems or fabric switches for your blade center, in which case, don’t bother with VLANs. But otherwise, you need to be able to divide a physical network into discrete virtual networks so they can carry out their OpenStack jobs without bumping into one another.
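The VLAN point above can be made concrete with a few lines of code. This is a hypothetical Python helper, not anything Fuel ships: the network names and VLAN IDs are illustrative, and it simply generates the iproute2 commands that carve one physical NIC into tagged sub-interfaces, one per OpenStack network.

```python
# Map OpenStack's logical networks onto 802.1Q VLANs of one physical NIC.
# Network names and VLAN IDs below are illustrative, not Fuel defaults.
VLAN_PLAN = {
    "management": 101,
    "storage":    102,
    "private":    103,
}

def vlan_commands(nic: str, plan: dict) -> list:
    """Return the iproute2 commands that create one tagged VLAN per network."""
    cmds = []
    for network, vid in sorted(plan.items(), key=lambda kv: kv[1]):
        cmds.append(
            f"ip link add link {nic} name {nic}.{vid} type vlan id {vid}"
            f"  # {network}"
        )
    return cmds

for cmd in vlan_commands("eth0", VLAN_PLAN):
    print(cmd)
```

Fuel does this kind of mapping for you through its network settings screen; the sketch just shows why a single cable can still carry several isolated OpenStack networks.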
I previously mentioned that I have set up and plan to maintain two separate OPNFV deployments – one for experimental and development purposes, and a second for internal QA, testing, and demos. One is a set of rack mount servers, the other an HPE Bladecenter. As these are two unlike environments, I chose to use two separate installations of Fuel to deploy the two separate OPNFVs. But I was able to save the price of a system by doing the two Fuel master nodes as virtual machines. Overall, Fuel does a lot of sitting around. It’s only hard at work when called on to deploy Openstack. So, the two virtual Fuel nodes on a single modest CentOS 7 host had more than enough resources when each needed them.
Openstack Nodes and Roles
Let’s now take a look at the Openstack nodes and roles that are part of the OPNFV deployment. The “nodes” here are the physical machines themselves, while the “roles” are the necessary Openstack services or functions that must be present to get to a working deployment. And to make matters more interesting, there are many possibilities for assigning roles to nodes.
OpenStack Roles – there is a fairly long list of roles shown by Fuel, but only some are relevant for OPNFV. The most important of these are:
- Controller – the brains of the Openstack cloud. It provides a user interface (graphical or command line) for the many functions involved in cloud operation. The front ends for all of the other components are also found here. They talk via sockets to their counterparts on other nodes, where the actual work is performed.
- Compute – the brawn of the Openstack cloud. The guest virtual machines are created, run, and destroyed here.
- Networking – your main networking needs in Openstack today are fulfilled by the Neutron project. It provides its own API to tie together network-related functions across other Openstack components. With OPNFV, there are Software Defined Networking (SDN) network plugin options below Neutron that may be selected. For the current release they would be OpenDaylight and ONOS. Operating at the Neutron level, you would, in most cases, be unaware of what happens via the plugin.
- Storage – “Ceph” is an object storage platform. It may span several nodes and is used for several kinds of storage, such as virtual machine images, files, and block storage volumes. It can replicate its objects to avoid a single point of failure.
- Telemetry – “Ceilometer” uses MongoDB to keep track of cloud usage and performance statistics, and can also be used to store data specific to application virtual machines.
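Since there are many ways to assign these roles to physical nodes, it helps to treat a proposed layout as a simple coverage check. The sketch below uses the role names from the list above; the node names and the particular layout are made up for illustration.

```python
# Core OPNFV roles, per the list above; node names are hypothetical.
REQUIRED_ROLES = {"controller", "compute", "storage", "telemetry"}

# A candidate mapping of roles to physical machines.
layout = {
    "node-1": {"controller", "telemetry"},
    "node-2": {"compute"},
    "node-3": {"compute", "storage"},
}

def missing_roles(layout: dict, required: set) -> set:
    """Return the required roles that no node in the layout provides."""
    covered = set()
    for roles in layout.values():
        covered |= roles
    return required - covered

# An empty result means every required role has a home.
print(missing_roles(layout, REQUIRED_ROLES))
```

Fuel performs this kind of sanity checking in its GUI before it lets you deploy, but thinking of it as "does every role have a home?" makes the node-versus-role distinction easier to keep straight.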
Now, how does OPNFV suggest deploying the system? It’s not always clear – how many boxes do you really need, and how are functions best divided up among them? Often, you will want to know the minimum needed, as few people seem to have closets full of up-to-date servers with good virtualization support. Things that need to be taken into account to arrive at the “right” (for you, anyway) answer include:
- Number of potential guest VMs expected to be in use.
- What are the guest nodes going to be doing? Are they compute, storage, or network intensive? This will influence the choice of CPUs/cores, available memory and disk storage on the compute nodes. Remember - the compute nodes are where you want to spend your money!
- High availability, fault tolerance, and disaster recovery. Is this important? For experimental or development systems, not so much; for production systems, very much so. A minimum of three duplicate systems is needed for true HA. But I think we are all at the “get your feet wet” stage here, so this will not be so important. HA configuration is also something I have yet to tackle. It may be sufficient to simply do a nightly backup of development materials – scripts, configuration files, images, and snapshots - so they don’t inadvertently disappear.
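The sizing questions above can be turned into a back-of-the-envelope calculator. The overcommit ratio and the flavor numbers in the example call are assumptions for illustration, not OPNFV recommendations – plug in numbers for your own workload.

```python
import math

def compute_nodes_needed(guest_vms: int, vcpus_per_vm: int, ram_gb_per_vm: int,
                         cores_per_node: int, ram_gb_per_node: int,
                         cpu_overcommit: float = 4.0) -> int:
    """Estimate how many compute nodes a guest workload needs.

    CPU may be overcommitted (a 4:1 ratio is assumed here for illustration);
    RAM is counted 1:1, since overcommitting memory hurts quickly.
    """
    by_cpu = guest_vms * vcpus_per_vm / (cores_per_node * cpu_overcommit)
    by_ram = guest_vms * ram_gb_per_vm / ram_gb_per_node
    return max(1, math.ceil(max(by_cpu, by_ram)))

# Example: 40 small guests (2 vCPU / 4 GB) on 16-core, 128 GB boxes:
print(compute_nodes_needed(40, 2, 4, 16, 128))
```

Whichever resource (CPU or RAM) runs out first drives the node count, which is also why the compute nodes are where the money should go.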
I realize I’ve given you no hard recommendations here yet. Things that worked well for me will be revealed when I get into more of the details in subsequent blogs. The next blog entry will be relatively short – how to get prepared before actually installing OPNFV. Then we’ll get down to the meat of things.
This post is a part of the "OPNFV Demystified" blog series. The next part of the blog series will be posted on Thursday morning each week. Check out the intro post, if you missed it.
- The Components of OPNFV
- Getting started with OPNFV – How Should I Prepare?
- Implementing OPNFV in the Real World, Part One
- Implementing OPNFV in the Real World, Part Two
- The challenges of Networking in OpenStack
- Why Automation Is Key to Your OPNFV Deployment
Publish Date: July 14, 2016 5:00 AM
In March, Chetan Sharma released a 2015 US Mobile Market update report. There are many interesting points in this report, but one thing I want to talk about today is that voice revenues declined by 24% and messaging revenues declined by 18%. Wow, that’s a lot. And since the overall mobile market increased in the US, that means data service revenue is increasing quite a bit. Is that a concern or not?
Years ago, this was clearly a concern. No one wanted to be just a bit pipe. It’s pretty well understood and accepted now that for that not to be the case, value-added services on top of the data pipe will need to be offered. So the concept of the connected car, smart city, smart home, etc. is where the future growth is. All of these concepts require data and are value-added services on top of it, but they do not utilize messaging or voice.
But there are huge opportunities for messaging and voice innovation on top of these new services. The application or service doesn’t have to revolve around voice or text as in the past; instead, voice, video, or text can be an integral (or optional) part of one of these new IoT types of services. That means WebRTC and media servers can play a huge role in some of these new innovative services going forward.
Next week I’ll start a three-part blog series about IoT and real-time communications to explore this concept a bit more.
Publish Date: June 28, 2016 5:00 AM
SaaS, IaaS, PaaS, NaaS, MaaS, and UCaaS are all examples of XaaS. According to TechTarget, XaaS is a collective term said to stand for “anything as a service.” The acronym refers to an increasing number of services that are delivered over the Internet, on demand, and on a subscription basis. XaaS is the essence of cloud computing. For the readers of this blog, especially network operators, UCaaS – Unified Communications as a Service – should be the acronym of interest, especially since it is forecast to be a $37.85 billion market by 2022, according to Transparency Market Research.
No need to dwell on the well-known fact that network operators are losing ground to OTT services. Let’s focus on the positive and point out that network operators are in an ideal position to offer UCaaS because of their infrastructure – Hosted PBX and SIP Trunking for their business customers, and basic/advanced Class 5 services for their residential customers. Not to mention more revenue-generating services such as hosted contact centers, network auto attendant, corporate ACD, outbound SMS and calling, WebRTC video conferencing, and call interception, just to name a few.
An old article I recently re-read covered a lot of the benefits of UCaaS succinctly. According to the article, UCaaS offers flexibility and expandability that small and medium-sized businesses could not otherwise afford, allowing for the addition of devices, modes, or coverage on demand. The network capacity and feature set can be changed from day to day if necessary, so that functionality keeps pace with demand and resources are not wasted. There is no risk of the system becoming obsolete and requiring periodic major upgrades or replacement.
Let’s not forget a single bill that consolidates telecoms services.
Off-loading communications needs from on-premises to the cloud is a big step for any enterprise, so once they are on board, network operators have a captive audience that must continuously be offered new services. This, in turn, directly and positively affects ARPU.
What makes UCaaS such a great business proposition for network operators is that the infrastructure is already in place; mostly what is needed now are the applications. Once again, network operators are ideally positioned to take advantage of this rapidly growing UCaaS trend with their existing infrastructure – something most OTT players are unable to do.
As US President Theodore Roosevelt said, “Do what you can, with what you have, where you are.”
Publish Date: June 24, 2016 5:00 AM