
J S - ContactCenterWorld.com Blog

High availability - seeing the bigger picture

For the large-scale, distributed call center, or hosted/ cloud-based service provider, every minute of unscheduled outage costs money in agent wages and missed opportunities. This drives a search for the Holy Grail of software that delivers very high uptime - maybe 99.999%.

Even if we believe the hype of some vendors, if we look at the bigger picture, we see that any such claims refer to software only. But in the new world of virtual, hosted and cloud-based contact centers, the vast majority of downtime is caused by hardware and network failure. And in real life (i.e. outside of the Marketing Dept.) no-one is interested in separating out the causes of failure. Downtime is downtime, whatever the cause. So the question end users should be asking software vendors is not “How many 9’s?” but “How well does your software cope with the inevitable infrastructure failures?”

Let’s look at the areas of risk and how call center software vendors can (and should) respond to them.

Risk 1 - software failure

Yes, software sometimes fails. Call center software does not operate in a sealed, air-tight, proprietary environment as it once did; IP-based technology is open to the elements, and subject to all the outages and problems that come with distributed components and high-volume network usage.

But software vendors can employ several methods to minimise the impact of software failure:

Auto failover – if for some reason the software meets a condition with which it cannot cope, it should restart automatically and carry on where it left off, with minimal impact, and no manual intervention.

Segmentation – dividing the product into smaller and smaller discrete chunks so that the impact of any failure can be minimised.

Clustering – spreading the processing load between many instances, with automatic load balancing. This allows for the failure of one unit with minimal impact on the system.
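
To make the auto-failover idea concrete, here is a minimal watchdog sketch in Python. It is an illustration under assumptions, not Sytel's implementation: the service binary name is hypothetical, and a real dialer would also restore session state on restart.

    # Minimal auto-failover sketch: restart a service process if it dies.
    # The command ("./media_service") is a placeholder, not a real binary.
    import subprocess
    import time

    CMD = ["./media_service"]          # hypothetical service executable
    RESTART_DELAY_SECS = 1             # brief pause before restarting

    def supervise():
        proc = subprocess.Popen(CMD)
        while True:
            if proc.poll() is not None:            # process has exited
                print(f"service exited with code {proc.returncode}; restarting")
                time.sleep(RESTART_DELAY_SECS)
                proc = subprocess.Popen(CMD)       # carry on where it left off
            time.sleep(0.5)

    if __name__ == "__main__":
        supervise()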

Risk 2 – hardware/ network failure

Hold on. Why should software vendors like Sytel be concerned about conditions beyond their control? Because any responsible vendor aims to make life as productive and trouble-free as possible for the end-user. This means providing backup and redundancy capabilities to mitigate failures outside of the software.

Just as back-up generators take the strain after a power outage, so each component in the chain – from servers and power supplies, to voice and data networks – should have a back-up ready to take over in the event of hardware or network outage. In the case of servers/ virtual machines, this doesn’t necessarily mean ‘1 for 1’ redundancy. As it is unlikely that more than one would fail at the same time, a single back-up can be configured to take over for a number of different primaries.

In order to take over automatically, any back-up must be constantly maintained to mirror the state of the primary server. This is where good software design comes in, providing a mechanism which will notice when the primary is no longer in service, and immediately kick-start the back-up.
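
One simple way to build such a mechanism is a heartbeat: the primary touches a shared file every few seconds, and a monitor promotes the back-up the moment the heartbeat goes stale. A rough sketch, in which the file path, timings and takeover command are all assumptions of ours:

    # Heartbeat-based failover sketch. Paths and intervals are illustrative.
    import os
    import subprocess
    import time

    HEARTBEAT_FILE = "/shared/primary.heartbeat"   # touched by the primary every few seconds
    STALE_AFTER_SECS = 10                          # assume failure beyond this
    BACKUP_CMD = ["./media_service", "--takeover"] # hypothetical backup start command

    def heartbeat_age():
        try:
            return time.time() - os.path.getmtime(HEARTBEAT_FILE)
        except FileNotFoundError:
            return float("inf")                    # no heartbeat at all: treat as failed

    def monitor():
        while True:
            if heartbeat_age() > STALE_AFTER_SECS:
                print("primary heartbeat stale; kick-starting the back-up")
                subprocess.Popen(BACKUP_CMD)
                return
            time.sleep(1)

    if __name__ == "__main__":
        monitor()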

Be prepared to ask hard questions of your supplier, like “What happens if an earthquake hits?” If the answer is “Not our problem!” you might look for a vendor that is willing to try harder and consider the bigger picture.

Michael McKinlay - CEO, Sytel Limited

Publish Date: September 27, 2012 11:18 AM


Drinking from the fire hose - The challenge of 'big data' in the contact center

As more contact centers escape the confines of physical location and become virtual/ cloud-based, the volume of customer interaction data they produce starts to climb. It has the potential to grow massively, even exponentially, in a short time, with virtually no upper limit. This phenomenon has been dubbed 'big data' and if it hasn’t reached you yet, it may be closer than you think.

Managers must carefully consider how to cope with, and extract value from, this big data load; high performance tools are needed that, as some have put it, can drink from the fire hose.

Infinite Scalability

In the good old days, call center systems were largely finite; 'X' number of agents over 'Y' period would produce 'Z' specific data items, creating 'V' volume of data. X and Y could be predicted and Z was static, so V could be calculated fairly accurately, and storage and retrieval systems for that data could be built accordingly.

In the new world of cloud/ hosted contact centers, systems have become almost infinitely elastic and therefore inherently unpredictable. As the variables increase within these systems, data becomes 'big data'. For example:

  • Number of agents
    Hosted/cloud systems provide ease of scalability and agent numbers will increase/ decrease as demand dictates. 100 today might be many times that tomorrow.
  • Specific data items
    Systems once collected only a known set of KPIs – talk time, wrap time, etc – per call. The new world calls for infinite extensibility, even the definition of new metrics, and certainly the ability to customise and extend.

The period a report could cover has not changed and must still be configurable, spanning anything from minutes to years.

Tools that can Cope

The speed of storage and retrieval of high data volumes is a major issue. Traditional SQL databases do not lend themselves to ultra-fast read/ write, so users are looking elsewhere.

The breed of tools known as 'noSQL' (e.g. MongoDB) is built to deliver in this environment. These maintain many of the familiar SQL features, and gain speed by adding some unique functionality, e.g. the ability to process many transactions simultaneously via contention-free, non-locking updates, and fast handling of unstructured data in a file system rather than a database.
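
As a small illustration, here is how interaction events might be written to MongoDB using the pymongo driver. The database, collection and field names are our own invention, not a prescribed schema:

    # Sketch: writing unstructured interaction events to MongoDB.
    # Requires the pymongo driver; names and fields are illustrative.
    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    events = client["contact_center"]["interaction_events"]

    # Documents need no fixed schema, so new metrics can be added at any time.
    events.insert_one({
        "agent_id": "agent-042",
        "media": "voice",
        "outcome": "live_connect",
        "talk_time_secs": 187,
        "ts": datetime.now(timezone.utc),
    })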

But capture of big data is only one side of the coin. The other is that the data is practically useless unless it can be ordered, analysed and processed. Only then can comparisons be made, patterns emerge and big data become business data.

In order to start making sense of the data explosion, the right tools are needed and are coming on stream. In order to aid the speed at which meaningful results can be produced, many of these employ several methods of data aggregation.

  1. Real-time aggregation 
    With this method, real-time data is used to update counts of events – e.g. live connects, abandoned calls, etc – and other calculated metrics – e.g. average talk time. Running totals can then be used in reports without any need for further processing, database interaction, etc., producing results much faster than otherwise possible. This method also reduces the need for the more time-consuming periodic aggregations below.
  2. Periodic aggregation (MapReduce)
    Periodic aggregation (a.k.a. MapReduce) is commonly available off the shelf within noSQL systems. Using this method, the work of aggregation is carried out simultaneously by many processors, maybe within the same server, maybe in a widely distributed cluster. The results are then fed back to a master process which presents the results as output to a reporting system or writes to a database. As this method is more processor intensive, it cannot be done in real-time with big data.
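
Below is a minimal sketch of the real-time method in (1): counters and running averages are updated as each event arrives, so reports can read them instantly with no further database work. The event names are hypothetical.

    # Real-time aggregation sketch: keep running totals as events arrive.
    from collections import defaultdict

    class RealTimeAggregator:
        def __init__(self):
            self.counts = defaultdict(int)   # e.g. live connects, abandoned calls
            self.talk_time_total = 0.0
            self.talk_time_calls = 0

        def on_event(self, kind, talk_time_secs=None):
            self.counts[kind] += 1
            if talk_time_secs is not None:
                self.talk_time_total += talk_time_secs
                self.talk_time_calls += 1

        def average_talk_time(self):
            if self.talk_time_calls == 0:
                return 0.0
            return self.talk_time_total / self.talk_time_calls

    agg = RealTimeAggregator()
    agg.on_event("live_connect", talk_time_secs=142)
    agg.on_event("abandoned_call")
    print(agg.counts["live_connect"], agg.average_talk_time())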

Using a combination of real-time and periodic processes, various levels of aggregation are possible, depending on the view of the data required, e.g. an hourly report, weekly, monthly, or even yearly. As resolution zooms out, the level of detail required reduces and a higher level of aggregation is possible. As the metrics are already available, fast load time is maintained. Of course, no two users have the same requirements, so the levels of aggregation must be customisable.

But care must be taken in order to maintain accuracy when the user needs it. Aggregation must occur alongside the capture of individual interaction events, so that data is not lost and then has to be approximated.

Maybe you have not had to make any decisions about a 'big data' strategy yet; the fire hose may be safely trickling right now. But we suspect that any call center of 100+ agent seats will have to deal with this pretty soon. When the time comes, make sure you choose technology that is flexible enough to cope with future demand, so you will be waving, not drowning, when the fire hose is opened up.

Sytel Limited - contact center software & solutions

Publish Date: September 17, 2012 10:57 AM


The pain of separation - Call flow scripting for IVR and live agents

Contact centers with a need to develop call scripts/ call flows for both IVR and live agents really have just 2 possibilities:

  1. In-house IT resources – for script development in C# or HTML; usually only available to larger organisations, but often very expensive in terms of time and specialist resources
  2. Script design products - for script development by non-programmers

Some have access to both, and find that design products offer a way to substantially cut the time, effort and cost involved in script development, even if they then enhance the results with in-house resources. Many others rely on design tools only.

But such call center scripting tools are often built either for IVR only, or for agent scripts only, and are entirely separate entities. We see much frustration in the marketplace over how productivity is hindered by having to use a variety of tools to design and implement standard services.

Why is it so difficult? After all, in practice, calls travel between IVR and real agents all the time: a caller might navigate an IVR menu to get to an agent, who sends him to an automated payment script, which returns him to the agent. Simple!

But in order to make that flow happen, there could be a great deal of pain behind the scenes. Anyone who has ever had to design an IVR logic flow will testify to how time-consuming it can be.

There is no shortage of tools to develop IVR flows. Most of them provide integration with a PBX and/ or database. But because of the complexity, many of these tools come with an offer to buy in 3rd party script writing expertise.

There are also plenty of development products out there for building scripts for live agents. Or maybe MS Word has served you well for this over many years.

But by and large IVR and agent scripts are not built using the same tool. Why not? Surely they are fundamentally the same thing: routing and branching decisions based on user input. One reason may be that specialist agent scripting products don’t require inbuilt telephony integration (e.g. DTMF capture), and therefore many don’t offer it. With IVR, on the other hand, CTI is a basic necessity.

The cost of using separate tools is multiplied because they require integration, often a painful business, and the result often feels held together with string and sticky tape. A great deal of time and money can be wasted trying to force disparate applications to play nicely together.

The magic bullet would be a single environment - a Grand Unified Theory - for development and delivery of both agent and IVR scripts, that would

  • handle logic flows for both IVR and agent scripts
  • allow rapid development by non-programmers
  • integrate fully with PBX telephony functions
  • allow seamless transfer between IVR and live agents
  • handle both development and deployment

If you are frustrated with the number of tools your team has to work with, or the shakiness with which they are held together, be more demanding of your supplier, or contact Sytel to point you in the right direction.

Publish Date: August 17, 2012 9:58 AM


Closed vs. open - 2 roads to market

Sytel has identified two laws of marketing! We can't be sure but we suspect that most companies fit into one of two camps:

1) Closed architecture mode (locked-down!)

If you are a major player in any market - and we do mean major - then our first law of marketing says that you grow as follows:

  1. get yourself a well-recognised brand
  2. preach the gospel of big brand security
  3. leverage your brand by marketing related products and services
  4. make it clear to your customers that they should buy the new products and services you offer, not just because of brand security, but because... well, if you don't, then things won't work properly.

Point 4 is key, and one that we are all easily seduced by. Your writer speaks as someone who has been rummaging through his local DIY store looking for an "approved part" for his grass trimmer. Non-brand ones look OK and are cheaper, but do I want to take the risk?

2) Open architecture mode

If you are not a really major player, then there may be nothing to stop you establishing a good brand name for yourself, but that may not be enough to win over some of the customers you would like to tackle.

You have a great product. What do you do? Well our second law of marketing is of course radically different. You still work hard at differentiators that make your product stand out. But instead of excluding other vendors, you welcome them with open arms.

Here's how:

  1. remember that no vendor can ever hope to provide best of breed for all customer requirements; so be prepared to welcome best of breed partners on board
  2. provide open application programming interfaces (open APIs); not just one but probably several, to cater for all preferences
  3. since your product is going to be integrated with other products, make sure that a support process is defined that makes troubleshooting easy; not just in theory, but in practice
  4. go out of your way to offer integration help to other vendors, and don't think you always have to charge for it

Most worthwhile customers are risk-averse. If you can follow guidelines like this and make a virtue of them, you might be pleasantly surprised at the reception you get in the marketplace.

And in doing so, spare a thought for the followers of our first law of marketing! When the accent is on brand management and market control then innovation suffers. And eventually innovation is of course the only thing that matters.

Michael McKinlay - CEO, Sytel Limited

Publish Date: July 17, 2012 9:55 AM


Network Answering Machine Detection - myth or reality?

It has become traditional around this time of year for Sytel to comment on the ongoing debate over answering machine detection. Does AMD deliver real benefits to the call center? What about ‘false positives’? For some background, read our previous posts from May 2010 and May 2011.

The latest controversy is over Network Answer Machine Detection (i.e. AMD based on ISDN signaling, rather than sound patterns). Before we give our view on this, let’s review the context.

The standard method used by most vendors for many years is the cadence method. This is the analysis of bursts of volume (speech) interspersed with silence. One sample could be identified as the “Hello?” of a live person; another, the “Hello, we are not in right now...” of an answering machine/ voicemail service.

This method produces reliability of around 85%-90%. (Any claims from vendors to go above this are based on either wrong measurement or biased sampling.) You may think that sounds quite good, but it means that 10% of answering machines will get put through to agents as live calls, and, crucially, that 10% of the calls classified as machines are in fact live respondents, who get hung up on. These are known as ‘false positives’, and here’s the problem. Ofcom regulation in the UK stipulates that a ‘reasoned estimate’ of these must be included in the calculation for abandoned calls. Under normal conditions, a ‘best case’ estimate of 90% reliability already pushes the abandoned call rate beyond Ofcom’s regulatory limit of 3% of all live calls. If you run AMD in the UK, you can say goodbye to any benefit from your predictive dialer.
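
The arithmetic is stark. In the sketch below, only the 90% 'best case' figure comes from the discussion above; the call volume is an assumption of ours for illustration:

    # Illustrative arithmetic only, using the 'best case' figure from the text.
    live_answers = 600            # assumed live respondents in a calling session
    amd_reliability = 0.90        # 90% of calls classified correctly

    false_positives = live_answers * (1 - amd_reliability)   # live people hung up on
    abandoned_rate = false_positives / live_answers          # Ofcom counts these as abandoned

    print(f"{false_positives:.0f} false positives -> {abandoned_rate:.0%} abandoned")
    # 10% of live calls abandoned by AMD alone, versus Ofcom's 3% limit,
    # before adding any calls the dialer itself abandons.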

As a result of this dilemma, many vendors have sought alternative methods of doing AMD in order to keep the practice alive. Two themes are recurrent; Network Answer Machine Detection and byte pattern recognition.

  1. Network Answering Machine Detection
    In our (informed) view, any claims to use AMD based on ISDN signaling are fatuous. It does not exist. If it did there would be no debate, everyone would be using it, detection rates would be 100% accurate and Ofcom's job would be made a lot easier.

  2. Byte pattern recognition
    This works by picking up encoded audio which follows defined sequences of bytes. This technique works but has very limited application. There is no such thing as a standard network voicemail message. Networks must offer consumer choice. Some consumers customise, some simply switch voicemail off and some networks have equipment that does not encode voicemail speech consistently. The result of this is that network voicemail can be reliably detected using byte pattern recognition in around 15% of cases in the UK.

The inescapable conclusion is (still) that AMD in the UK using the methods described above simply cannot deliver the low levels of nuisance calls that Ofcom rightly mandates. If anyone knows of other methods that are more accurate we’d love to hear about it.

If you are outside the UK and still convinced that you need AMD, then Sytel’s is as good as any. But our advice remains that use of AMD is bad practice, leading to bad customer relations and lower call center profits. As Sytel has always said: just switch it off.

For a more in-depth analysis, ask for a copy of our white paper on The Science Behind Answering Machine Detection.

Michael McKinlay - CEO, Sytel Limited

Publish Date: July 17, 2012 9:42 AM


Keeping cloud services on tap

Handling failover in cloud-based/ hosted call center applications

This month we look at failover, one of the key pillars of delivering high availability cloud-based or hosted contact center services. In particular, how do you prevent failure and disaster from becoming loss of service? And what questions should users/ service providers ask of vendors such as ourselves?

We in the developed world are blessed with utilities on constant supply; clean water, electricity, an Internet connection. And we are surprised and outraged at the inconvenience caused when one of these systems is interrupted: no cup of tea, no light, no instant connectivity. Horrors!

For the call center, interruption of core cloud/ hosted services such as IP bandwidth, telephony, call control, etc, is more than inconvenience; it can mean bad customer service, loss of revenue and loss of reputation.

Although 100% uptime is the ideal, the challenge of real-time processing in the call center makes this impossible. Why? Read on.

Even apart from this, the reality is that without a Department-of-Defence-sized budget, the most users can expect is ultra-high uptime. Scheduled replacement and failure of hardware, network, power, voice carrier, etc, can all contribute to downtime.

From a software perspective, downtime is usually caused by some form of outage:

  • planned – If a software platform is not designed to be upgraded on the fly, upgrades can cost minutes, even hours; not good for a ‘high availability’ system.
  • unplanned – the result of a failure somewhere in the system. Some failures can be foreseen, e.g. lack of resources (memory, disk space, etc); with careful planning and appropriate system monitoring, these can be eliminated. Others are unforeseen but inevitable, and can come on any scale, from individual component level (e.g. hard disk, network switch, media gateway) to major disasters (e.g. earthquake, tsunami)

The central question for any vendor/ service provider offering high availability is: how do you prevent failure and disaster from becoming loss of service?

The key is to eliminate ‘single point of failure’ by duplication/ replication of services, a.k.a. software redundancy. But this comes with its own challenges.

The ideal is that every service has a ‘hot standby’ – a secondary service that is constantly running and mirrors the state of the primary. On failure, all dependencies and resources are seamlessly switched over. This is the basis of the worldwide web, and other carrier networks. But while this works for many processes in the call center, it cannot work for real-time processing (e.g. conferencing/ recording of voice traffic, or dialer/ ACD pacing). Being real-time, the state of each changes too fast to make persisting to disk practical. So if a processing service fails, resources cannot simply be switched and normal service resumed. There will be some temporary degradation of service as current sessions end, and have to be re-established by the back-up system, or as the backup dialer service gets up to speed.

The alternative is ‘cold standby’. In this model, a copy of each service is kept on a separate system (maybe a VM, a different server, even on a different continent) ready to be brought into service when necessary.

But how do we know when this is necessary? For high availability, waiting for someone to notice a failure is not good enough. Action must be taken immediately and automatically. This requires a monitoring service continually asking surrounding services “Are you alive?” If the expected answer “Yes” does not come, a control service is also required to tell the secondary service to start. (Incidentally, these monitoring and control services must themselves have back-ups. As Juvenal asked: “Who watches the watchers?”)
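
As a rough sketch of that monitoring loop, assume each service answers a simple TCP health probe on a known port; the address and the start command below are hypothetical:

    # Liveness-probe sketch: ask "Are you alive?" over TCP and start the
    # secondary if no answer comes back. Addresses are illustrative.
    import socket
    import subprocess
    import time

    PRIMARY = ("10.0.0.5", 9000)                  # hypothetical health-check endpoint
    START_SECONDARY = ["./start_secondary.sh"]    # hypothetical control action
    PROBE_INTERVAL_SECS = 2

    def is_alive(addr, timeout=1.0):
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def watch():
        while True:
            if not is_alive(PRIMARY):
                print("primary not answering; starting the secondary")
                subprocess.Popen(START_SECONDARY)
                return
            time.sleep(PROBE_INTERVAL_SECS)

    if __name__ == "__main__":
        watch()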

Another challenge is that the primary will have been in a particular state when it failed. The secondary must be initialised using the same settings, including any security and licensing. This could come from a duplicate configuration file on the secondary server, or in the cloud. It must also be made ready, perhaps from an up-to-date ‘current status’ file.

Finally, all active resources and routing must be switched to the secondary.

After a smooth handover, what happens to the primary? If the cause of the failure was a transient glitch, auto-restart would be best, reprovisioning and reconnecting resources to bring itself back into service. If not, the IT department will be getting their hands dirty.


High availability for hosted/ cloud-based call center services cannot be taken for granted. Support for failover must be designed into the software at a deep level. It must be planned for and worked toward, so that service is as seamless as possible.

Next time you turn on the water tap, remember to count your blessings and thank the Romans for pioneering a water system with ultra-high availability.

Michael McKinlay - CEO, Sytel Limited

Publish Date: July 16, 2012 4:02 PM


Can you spare a minute?

Delivering high availability for the call center

As delivery of cloud computing services threatens to become a way of life, it is timely to look at where vendors like ourselves should be concentrating their efforts to ensure very high uptime, or if you like, to ensure that outage over long periods of time is measured in seconds and minutes only.

The main job of a software vendor is of course to design software that not only can be deployed with full redundancy but is also engineered well in the first place, so that it doesn’t fail.

But from a user’s point of view, there are a number of other things that also need to be in place to ensure very high uptime. These include:

  • a ‘no single point of failure’ (redundant) hardware architecture
  • connections to multiple bearer networks
  • appropriate levels of system monitoring
  • strict change control

Without these features, even the best call center software in the world may simply fail to deliver. But let’s focus on what software vendors can do.

Outage can cost not only revenue but reputation, too. If the outage is caused by a software component within your platform, even a fully redundant architecture will not protect you.

Here are two complementary software approaches to minimising down-time:

  1. Process separation
    i.e. dividing server-side components into discrete services, each delivering a specific function. This minimises complexity of components, ring-fences failure-prone operations and therefore minimises failure rates and failure cost.

    One application of this might be a database proxy that contains the code to manage database transaction failure. It publishes interfaces so that other services can take advantage of its capabilities. This means that only one application has to implement the complex code for managing database transaction failure, while its capabilities can be used by any other application. (A minimal sketch appears after this list.)

  2. Multiplexing
    i.e. running one activity across several physical processes. This could be across multiple discrete instances (multi-instancing), or multiple connected instances (clustering):

    a) Multi-instancing
    Multi-instancing allows software components to be installed multiple times on the same computer, allowing each component to operate simultaneously but independently; or allowing different types of data to be associated with different instances, e.g. tenant data in a multi-tenant environment.

    By multi-instancing, the load on each service is reduced, and the likelihood and cost of failure is reduced.

    b) Clustering
    A cluster consists of a set of connected servers (physical or virtual) that work together so that in many respects they can be viewed as a single system.

    Performance, capacity and availability can be scaled up across multiple systems at a fraction of the cost it would take to achieve in a single system.

    Further protection can be provided by making one node of the cluster redundant, maintained as a ‘hot standby’. This entails deploying N+1 servers to deliver N servers’ worth of capacity. Deploying a redundant cluster requires the running of a central control process, or script, for making bridging decisions.
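
Returning to the database proxy example in (1), here is a minimal sketch of the idea: one class owns the retry logic for failed transactions, and every other service calls through it. sqlite3 stands in here for whatever database a real platform would use, and the retry policy is an assumption of ours:

    # Sketch of the database-proxy idea: a single owner of the code that
    # manages transaction failure, published for other services to use.
    import sqlite3
    import time

    class DatabaseProxy:
        def __init__(self, path, retries=3, backoff_secs=0.5):
            self.path = path
            self.retries = retries
            self.backoff_secs = backoff_secs

        def execute(self, sql, params=()):
            last_error = None
            for attempt in range(self.retries):
                conn = sqlite3.connect(self.path)
                try:
                    with conn:                        # commit on success, roll back on error
                        return conn.execute(sql, params).fetchall()
                except sqlite3.OperationalError as err:   # e.g. database is locked
                    last_error = err
                    time.sleep(self.backoff_secs * (attempt + 1))
                finally:
                    conn.close()
            raise last_error

    proxy = DatabaseProxy("calls.db")
    proxy.execute("CREATE TABLE IF NOT EXISTS calls (id INTEGER PRIMARY KEY, outcome TEXT)")
    proxy.execute("INSERT INTO calls (outcome) VALUES (?)", ("live_connect",))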

High availability is a must-have for cloud deployments, and with the right design and some careful planning, it is certainly achievable (without astronomical expense).

Stay tuned as we look in more depth at the detail of delivering high availability in future blogs.

  Michael McKinlay - CEO, Sytel Limited

Publish Date: July 16, 2012 3:56 PM


How to Juggle with 4 Balls (and a Chainsaw) – Part 2

The Challenge of Converged Multi-Media in the Contact Center

Last time, we looked at how choice of technology in a multi-media contact center can enable speed of response. This time, we address the other requirement of customer satisfaction: quality of response.

Contact center agents are empowered to provide excellent quality of response by continual evaluation, training and improvement. The tools that enable this to happen are standard for voice-only interactions, but how about web chat, email, SMS, video and others? Rather like juggling with 4 balls (and a chainsaw), this is a challenge.

Multi-media Monitoring

Call center agent monitoring is essential for maintaining quality of service. It allows supervisors to check that agents follow established protocols and procedures, and reveals areas that need further training.

Just as supervisors can monitor, coach and barge with voice calls, the same is required with email, chat and other media types. Whereas voice calls just require an audio connection, the challenge for other media types is to also provide access to the agent screen, at several levels:

SMS, email, web chat, etc (text-based interactions)

  • Monitoring – requires view-only screen
  • Coaching - requires sharing mouse and keyboard with agent/ view-only screen plus audio
  • Barging – requires taking over mouse and keyboard from agent

Desktop sharing and take-over requires either built-in monitoring software or integration with third party remote monitoring tools.

Video

This requires special handling (rather like that chainsaw):

  • Monitoring – requires view-only screen
  • Coaching - requires view-only screen plus audio
  • Barging – requires software that enables the supervisor to replace both agent-side audio and video with their own.

Supervisors can only continue to improve agent performance and satisfy customers’ needs if they have access to the right tools.

Multi-media Recording

This is typically useful for agent scoring, post-interaction coaching, complaint review and dispute resolution.

Whereas voice call recording involves creating MP3s, the challenge is to be able to handle other media types appropriately:

  • SMS, email, web chat – if there is a unified queue mechanism, simple text can be captured as it passes through the system
  • Video – requires capture of both agent and customer sides of the interaction, both visual and audio. If the agent portion is displayed on the agent screen alongside the customer portion, a single movie capture of the agent screen (e.g. an AVI file) is possible.
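
For the text-based media, a unified queue makes capture almost free: a thin wrapper records each message as it passes through. A minimal sketch, with the queue and log format invented for illustration:

    # Sketch: record text interactions as they pass through a unified queue.
    import json
    import queue
    from datetime import datetime, timezone

    class RecordingQueue:
        def __init__(self, log_path):
            self._queue = queue.Queue()
            self._log_path = log_path

        def put(self, message):          # message: dict with channel/sender/text
            stamped = dict(message, ts=datetime.now(timezone.utc).isoformat())
            with open(self._log_path, "a") as log:
                log.write(json.dumps(stamped) + "\n")
            self._queue.put(message)

        def get(self):
            return self._queue.get()

    q = RecordingQueue("interactions.log")
    q.put({"channel": "web_chat", "sender": "customer", "text": "Hi, I need help"})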

If the various types of interaction recording are fully and easily available, agents can be scored, targeted for further training or discipline, etc, helping to maintain high quality interactions and therefore customer satisfaction.

Multi-media Reporting

Reporting gives visibility on both what is happening, and what has happened, so that issues that threaten customer satisfaction can be addressed immediately.

Once again, multi-media interactions need a different approach to voice calls. Traditional key performance indicators (KPIs) for voice don’t work for text-based interactions or video, so new KPIs must be established for each media type.

And wouldn’t it be great if supervisors could respond directly from a reporting interface? e.g. move agents between media queues, or retrieve recorded interactions. This requires both a tight, unified approach to media queuing, and a seamless blend of data and system control in a unified interface. That’s quite a challenge.

If you are thinking “That’s all very well, but...” – it’s OK. A move toward supporting multi-media interactions need not be expensive. Remote agents and virtualisation are just as feasible as in a voice-only center, so the move would not require a complete overhaul of existing infrastructure.

Support for multiple media types throughout the contact center, rather like juggling, does present its challenges, but with the right approach, an awareness of the complications (look out for that chainsaw!) and the right software(!), all balls can be kept in the air and customer satisfaction can be kept high. Happy juggling!

Publish Date: April 20, 2012 10:59 AM


How to Juggle with 4 Balls (and a Chainsaw) – Part 1

The Challenge of Converged Multi-Media in the Contact Center

Success in the contact center is all about the customer experience. Meet (or exceed!) the customer’s expectations and all is well. Once upon a time, a bank that serviced both in-branch and phone interactions well had happy customers. Rather like juggling with 2 balls, it was relatively easy. These days, to keep the customer happy a contact center must manage voice, email, web chat, video, SMS (and others) successfully. Rather like juggling with 4 balls (and a chainsaw), this can be challenging. (Chainsaw? Read on.)

Customer expectations boil down to a combination of speed and quality of response, and each has its own technological challenges. In this blog, we look at speed of response. In the next, quality. Stay tuned.

Speed of Response – Tick, Tock

Customer expectations in terms of speed of response are quite different for each media type. For instance:

  • web chat, voice, video – the customer expects an almost instant response, i.e. within seconds
  • SMS (text) – not quite instant, but still prompt (say within 1 min)
  • email – not instant (say within 1 hour)

So how do you maximise your chance of meeting these expectations? By automatic assignment of sessions to agents. We could call this ASD (Automatic Session Distribution - the multi-channel equivalent of ACD). For this, the system needs both presence information and centralised and unified queue management.

Presence Information – Are You Free?

In order to assign any session to an agent, the system must know if he is available to receive it. Trouble is, the answer isn’t just a simple yes or no. Each agent may have complex rules about how many of each media type he can handle at once; for example, agent Jim, currently on a voice call, may not be available for an incoming web chat session, but agent Sarah, already dealing with two chat sessions, may be able to handle a third.

This is further complicated by skill level; not all agents who can email effectively are good at video chat, too.

Presence information is also essential for manual transfer; Jim, trying to transfer a web chat session, needs to know if Sarah can handle it, given the other sessions she is currently involved in.
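
A minimal sketch of such presence rules follows. The capacities and the 'voice demands full attention' rule are our own illustrative assumptions, and skill level would be a further filter on top:

    # Presence sketch: per-agent concurrency rules by media type.
    class AgentPresence:
        EXCLUSIVE = {"voice", "video"}          # assumed to demand full attention

        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity            # e.g. {"voice": 1, "web_chat": 3}
            self.active = {media: 0 for media in capacity}

        def can_accept(self, media):
            if any(self.active[m] for m in self.EXCLUSIVE if m in self.active):
                return False                    # busy on voice/video: take nothing else
            return self.active.get(media, 0) < self.capacity.get(media, 0)

        def assign(self, media):
            assert self.can_accept(media), f"{self.name} cannot take {media}"
            self.active[media] += 1

    jim = AgentPresence("Jim", {"voice": 1, "web_chat": 2})
    sarah = AgentPresence("Sarah", {"web_chat": 3})

    jim.assign("voice")
    print(jim.can_accept("web_chat"))    # False: on a call
    sarah.assign("web_chat"); sarah.assign("web_chat")
    print(sarah.can_accept("web_chat"))  # True: room for a third session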

Centralised Management - The Lone Juggler

In order to juggle media sessions between agents, the left hand must know what the right hand is doing. This requires a centralised and unified control center that knows the exact state of every agent and every queue at all times. Disparate systems held together with tape just won’t work.

Now, what happens if the level of activity on any queue exceeds the assigned agents’ ability to respond within the service level agreement (SLA)? Other agents should be automatically drafted in to help, and ideally they should be put back into the original queue when service levels allow.

This could also be done manually by a supervisor, who will need the presence info to determine who can go where. We will be looking at media queue blending in more detail in a future blog.
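
Reusing the AgentPresence sketch above, the automatic overflow might look like this; the SLA targets and queue state are invented for illustration:

    # Overflow sketch: draft helpers into a queue that is breaching its SLA.
    import time

    SLA_TARGET_SECS = {"web_chat": 20, "sms": 60, "email": 3600}

    def breaching(queue_name, oldest_enqueued_at, now=None):
        waited = (now or time.time()) - oldest_enqueued_at
        return waited > SLA_TARGET_SECS[queue_name]

    def rebalance(queues, idle_agents):
        """Move idle, eligible agents onto any queue over its SLA target."""
        for name, oldest in queues.items():
            if oldest is not None and breaching(name, oldest):
                for agent in [a for a in idle_agents if a.can_accept(name)]:
                    agent.assign(name)    # drafted in until service levels recover

    queues = {"web_chat": time.time() - 45, "email": None}   # chat waiting 45s
    rebalance(queues, idle_agents=[AgentPresence("Sarah", {"web_chat": 3})])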

With all these facilities in place, a multi-media contact center should be able to keep all balls in the air successfully, delivering a fast response well within acceptable service levels.

So, which one is the chainsaw? Video, actually. This needs special handling, as we shall see next time, when we look at how contact center technology can enable quality of response.

Publish Date: April 20, 2012 10:31 AM


Do you know your predictive gain?

Did you ever hear the story about predictive gain? Hands up, please, those who know what it means. There's a couple of hands at the back and the rest of you are looking bemused. Not sure whether to fire our marketing manager (actually all of us), or tell all you users out there to take an intelligent interest in the technology you are buying.

The sad truth in our home market (the UK), and in fact most countries, is that many users are still buying predictive dialers without looking under the cover.

In most industries there is a reasonable expectation that technological leads of any kind are short-lived as competitors innovate and play catch-up. Consider the motor car. Doesn't matter what model or make; you know you can pick up just about any rental car and it's going to perform to a good standard. You can just start and go from cold, and it will get you to where you want to go, without you having to worry.

Now let's think about the call center market. Take an ACD, for example. Loads of them out there. Virtually all of them do a pretty good job. You can put a call on hold, make a transfer, etc. Not rocket science. And if you want more sophisticated facilities such as skills-based routing, that can usually be managed for a premium.

What about outbound dialing? Did I hear someone in the audience mutter that predictive dialers have become a commodity product? Maybe he works for the competition, or more likely he is just plain ignorant. Not necessarily his fault!

In a market where most vendors have not managed or bothered to innovate and update their products, you can expect to find that brand management and marketing hype just confuse the heck out of users. And that's the predictive dialing market for you.

But it is a serious matter. If you are a predictive dialer user and don't know what your predictive gain is, then it is a sure bet that your bottom line is suffering. And, especially for some outbound call centers, in these challenging times that can mean the difference between survival and calling in the receiver.

So what exactly is predictive gain, then? If you really care about getting the best performance out of your predictive dialer just give us a call and we will explain. Might be the best 5 minutes you ever invested in a phone call.

(Blog written in the skies over Iraq)

Publish Date: February 16, 2012 9:52 AM


5 must-haves for great call center reporting

Driving through a tunnel wearing sunglasses is not a good idea. Pretty obvious, isn’t it? It inhibits your ability to see what’s up ahead and react to road conditions, and makes it more likely that you will run into trouble.

Running a call center with poor reporting capability is very similar. It stops you from identifying trends, reacting to achieved targets or breached thresholds, and quickly responding to opportunities for improvement. Managers should be armed with the crucial information that allows them to make the best business decisions. But many performance management software packages just don’t deliver the right information, at the right time.

Here are 5 important requirements for a call center reporting package that actually does the job.

  1. All data ready to view
    The distinction between real-time and historical data is false, brought on mostly by technical limitations. Actually, real-time data becomes historical the moment it arrives, and users need to navigate through all data with no separation, being able to display a seamless mix of past and present. Quantity of data should not be an issue: the number of report users, number of agents or volume of archived history should have minimal impact on performance.

  2. Customization by non-programmers
    Every contact center environment (and even every project) is different, and every supervisor/ manager requires a particular set of data. In order to service a wide range of needs, every aspect of a performance measurement package should be customizable by the average user (not a programmer); for instance, setting and remembering a preferred layout, storing a list of frequently accessed reports, or creating new metrics (even data types) by relating and combining other data.

  3. Import from and export to other systems (e.g. CRM/ MIS).
    To see how contact metrics relate to other business intelligence, users need an integrated, holistic view from all corners of the enterprise, and across all media types, queues and locations.

  4. Interface - intuitive & easy-to-navigate
    Good reporting should take the pain out of finding what you are looking for. (The Google search engine is a great example.) It should enable you to dig, search and filter quickly and intuitively.

  5. Security - robust & comprehensive
    Joe, the average call center agent, probably should not have access to every detail of operation, but rather only certain queues, groups, campaigns or types of data. Fully customizable restrictions should be available to limit what each user can see.

Great call center reporting enables you to manage your business successfully by extracting maximum value from available resources. Anything less will almost certainly be costing you opportunities.

Publish Date: February 16, 2012 9:44 AM


Top 5 myths of outbound calling

In talking to predictive dialer users around the world, we come across many misconceptions about how predictive dialers actually work, and how to get the most from them. Here are the top 5, along with some suggestions for better practice.

  1. The longer I set my Ring No Answer (RNA) time, the greater my productivity will be.

    RNA is the duration an initiated call rings at the destination before being killed as a No Answer. The reality is that 95% of consumers who answer do so within 18 seconds, and setting an RNA longer than that just pushes line costs up and agent productivity down, not up! (See the short calculation after this list.) Not sure why? Just ask us.
     
  2. Answering machine detection (AMD) must be beneficial because it cuts down the number of non-live calls connected to my agents.

    Yes, it can help, but 85% detection accuracy is as good as you are likely to get in most cases, and even then there is a price to be paid. Firstly, you will be hanging up on live callers, thinking that they are answering machines, putting you at risk of trouble with the regulators. Secondly, you are likely to be keeping the consumer waiting for 2-3 seconds before putting the call through, which just annoys people and lowers the quality of the call. Any attempt to go above 85% will make these two effects worse. But ask for a copy of our paper on whether to use AMD at all.
     
  3. To gain the maximum benefit from my calling list, I should pass through it once, then call all the answer machines, no answers and busies again.

    Batching a group of previous non-connects will probably lead to more non-connects, and agents left waiting for a live call. It is better to keep the connect rate steady, firstly by combining fresh list data with smart recycling of individual outcomes (e.g. calling answering machines at a different time the following day), and secondly, by preventing supervisors from cherry-picking data at the expense of overall campaign performance.
     
  4. Predictive dialers need to be managed according to observation, e.g.

    a.   slowing the dialing rate when there are too many abandoned calls
    b.   dialing at say the reciprocal of the average connect rate, or according to average talk times, or at ‘x trunks per agent’


    The problem is that during any campaign, conditions such as connect rate, agent numbers, long/ short talk times, are liable to roller coaster. If the dialing rate is either fixed or under the control of a manager, it can lead to agent thumb-twiddling on one hand, and silent calls on the other. It is better to use a dialer that reacts immediately and automatically to these changing conditions.
     
  5. Call blending is unproductive and should not be used.

    The problem here is that agents tend to be good at either outbound processes, or inbound, but not both. Deploying agents to work outside their skill area degrades call center performance.

    The answer is firstly to restrict blending to those (rare) agents with both inbound and outbound skills, and secondly to blend not only voice traffic but multiple media types as well. And where do you find multi-disciplinary agents to deal with email, chat, SMS, social media, etc? Anybody with teenage children will see how naturally young people interact with multiple media sources. Your multi-disciplinary staffing issue could be a solution for youth unemployment!
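
As promised under myth 1, here is the back-of-envelope arithmetic on a long RNA. Only the 18-second figure comes from the text; the dialling volume and answer rate are assumptions of ours:

    # Illustrative arithmetic for myth 1.
    calls_dialled = 10_000
    answer_rate = 0.30                     # assumed: 30% of dialled calls ever answer
    rna_short, rna_long = 18, 30           # seconds

    never_answered = calls_dialled * (1 - answer_rate)        # 7,000 calls
    extra_ring_secs = never_answered * (rna_long - rna_short) # dead ringing added

    print(f"extra dead ringing: {extra_ring_secs / 3600:.1f} trunk-hours")
    # ~23 trunk-hours per 10,000 calls, spent chasing at most the last 5% of
    # answerers: line costs go up and pacing slows, so productivity goes down.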

If these myths are familiar to you, you are not alone. We hope the above suggestions will improve your use of existing call center software. If you would like more detail, please ask us (info@sytelco.com) for the white papers we have produced on these subjects.

Publish Date: November 21, 2011 11:08 AM
