Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture. He is a prolific blogger, podcaster, and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 


Service providers gain new levels of actionable customer intelligence from big data analytics

Posted By Dana L Gardner, Monday, August 11, 2014

It’s no secret that communication service providers (CSPs) are under a lot of pressure as they make massive investments in upgraded networks while facing shrinking margins and revenues from their eroding traditional voice or broadcasting businesses.
Traditional operators understand that they must go beyond what they did before. They need to offer more compelling services to reduce churn and acquire new customers. But how to know what services customers want most, and how much to charge for them?

A key asset CSPs have is the huge amount of information that they generate and maintain. And so it's the analytics from their massive data sets that becomes the go-to knowledge resource as CSPs re-invent themselves.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our next Big Data innovation discussion therefore explores how the telecommunication service-provider industry is gaining new business analytic value and strategic return through the better use and refinement of their Big Data assets.

To learn more about how analytics has become a business imperative for service providers, peruse this interview with Oded Ringer, Worldwide Solution Enablement Lead for HP Communication and Media Solutions. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the major trends leading CSPs to view themselves as being more data-driven organizations?

Ringer: CSPs are under a lot of pressure. On one hand, this industry has never been more central. Everybody is connected, spending so much more time online than ever before, and carrying with them small devices through which they connect to the network. So CSPs are central to our work and personal lives – and as a result, they're under a lot of pressure.


They’re under a lot of pressure, because they’re required to make massive investments in the networks, but they also need to deal with shrinking margins and revenues to subsidize these investments. So, at the end of the day, they’re squeezed between these two motions. 

One approach many CSPs adopted in the last year was to reduce cost and cut operations. But this is pretty much a trip to nowhere. Falling back on only the most basic, commodity services is no way to survive.

In the last two to three years, more and more traditional operators have come to understand that they must go beyond what they did before. They need to offer more compelling services to reduce churn and acquire new customers. They need to leverage their position as a central place between consumers and what they are looking for, to become a kind of broker of information.

The key asset they have in their hand to become such brokers is the huge amount of information that they maintain. It’s exactly where analytics comes into play.

Talking about mobile

Gardner: When we say CSP and telecommunication companies these days, we’re more and more talking about mobile, right? How big a shift has mobile been in terms of the need to analyze use patterns and get to know what's really happening out in the mobile network?

Ringer: Mobile services are certainly the leading tool in most operators' arsenals. Operators that have the subscriber "connected" with them wherever they go, around the clock, have an advantage over those that are more dependent upon or only provide tethered services. 

But we need to keep in mind that there's also a whole space for analytics solutions related to fixed-line services, like cable, satellite, broadband, and other landline services. CSPs are investing a lot in becoming more predictive, finding out what the subscriber really wants, what the quality of those services is at any given time, and how they can reduce churn in their customer base. 

Another kind of analytics practice operators pursue is being predictive about their investments in the network: understanding which network segments are used by more high-worth individuals, the ones they do want to improve service for, and beefing up those networks rather than the others.

Again, it’s these mobile operators who are on the front lines of doing more with subscriber data and information in general, but it is also true for cable operators and pay-TV operators, and landline CSPs.

CSPs, unlike most enterprises, need to handle not only the structured data that’s coming from databases and so on, but also unstructured data.

Gardner: Oded, what are some of the data challenges specific to CSPs?

Ringer: In the CSP industry, Big Data is bigger than in any other industry. Bigger, first of all, in volume. There is no other industry that runs this amount of data, if you take into consideration that they're carrying everybody's data, consumer and enterprise. But that's only one aspect, and not even the most complicated one. 

The more complicated thing is the fact that CSPs, unlike most enterprises, need to handle not only the structured data that’s coming from databases and so on, but also unstructured data, such as web communication, voice communication, and video content. They want to analyze all those things, and this requires analyzing unstructured data. 

So that’s a significant change in that type of process flow. They are also facing the need to look at new sets of structured data, data from IT management and security log files, from sensors and end-point mobile device telematics, cable set-top boxes, etc.

Second, in the CSP industry, because everything is coming from the wire, there's no such thing as off-line analytics or batch analytics. Everything needs to be real-time analytics. Of course, this doesn't mean that there will be no off-line or batch analytics, but even these are becoming more complex and span many more data sets across multiple enterprise silos.

More real time

If you analyze subscriber behavior right now and you want to make an offer to improve the experience that he’s having in real time, you need to capture the degradation of service right now and correlate it with what you know about the subscriber right now. So it's so much more real time than in any other industry. 

The market is still young. So it's very hard to say which one will be more dominant.

We’re not talking here about projects of data consolidation. It may be necessary in some cases, but that’s not really the practice that we’re talking about here. We’re talking about federating, referring to external information, analyzing in the context of the logic that we want to apply, and making real-time decisions.

In short, CSP Big Data analytics is Big Data analytics on steroids.

Gardner: What does a long-term solution look like, rather than cherry picking against some of these analytics requirements? Is there a more strategic overview approach that would pay off longer term and put these organizations in a better position as they know more and more requirements will be coming their way?

Ringer: Actually we see two kinds of behaviors. The market is still young. So it's very hard to say which one will be more dominant. We see some CSPs that are coming to us with a very clear idea on what business process they want to implement and how they believe a data-driven approach can be applied to it. 

They have a clear model, a clear return on investment (ROI), and they want to go for it and implement it. Of course, they need the technology, the processes, and the business projects, but their focus is pretty much on a single use case or a variety of use cases that are interrelated. That's one trend.

There's another trend in which operators say they need to start looking at their data as an asset, as an area that they want to centralize. They want to control it in a productive manner, for security, for privacy, and for the ability to leverage it for different purposes.

Central asset

Those will typically come with a roadmap of different implementations that they would like to do via this Big Data facility that they have in mind and want to implement. But what’s more important for them is not the quickest time to launch specific processes, but to start treating the data as a central asset and to start building a business plan around it. 

I guess both trends will continue for quite a while, but we see them both in the market, sometimes even in the same company in different organizations.

Gardner: How can a CSP really change its identity from being a pipe, a conduit, to being more of a rich services provider on top of communications?

And what is it that HP is bringing to the table? What is it about HP HAVEn, in particular, that is well suited to where the telecommunications industry is going and what the requirements are?

Ringer: HP has made huge investments in the space of Big Data in general and analytics in particular, both in-house developments, multiple products, as well as acquisitions of external assets. 

Complete platform

HAVEn is now the complete platform that includes multiple best-in-class product elements, based on multiple cutting-edge yet proven technologies, for exploiting Big Data and analytics. Our solution for the space is pretty much based on HAVEn and expanded with specific solutions for CSP needs, with a wide gallery of connectors for the external data sources that exist within the CSP space. 

In short, we’re taking HAVEn and using it for the CSP industry with lots of knowledge about what traditional CSP operators need to become next-generation CSPs. Why? 

Because we have a very large group of telecom experts within HP who interact with and leverage what we're doing in other industries and with many of the new-age service providers like the Amazons, Googles, Facebooks, and Twitters of the world. We go a long way back in telecom expertise, but we combine this with forward-thinking customers and our internal visionaries in HP Labs and across our business units. 

Gardner: Just to be clear for our audience, HAVEn translates to Hadoop, Autonomy, Vertica, and Enterprise Security, along with a whole suite of horizontally and vertically integrated applications that are specific to vertical industries. Is that right?

It’s coming from the business people that understand that they need to do something with the data and monetize it.

Ringer: Exactly.

Gardner: Tell me what you do in terms of how you reach out to communications organizations. Is there something about meeting them at the hardware level and then alerting them to what these other Big Data capabilities are? Is this a cross-discipline type of approach? How do you actually integrate HP services and then take that and engage with these CSPs?

Ringer: Those things exist, like engaging at a hardware level, but those are the less common go-to-market motions that we see. The more popular ones are more top-down, in the sense that we are meeting with business stakeholders who want to know how to leverage Big Data and analytics to improve their business. 

They don't care about the data other than how it's going to result in actionable intelligence. So, at the CSP level, it can be with marketing officers within the CSP who are looking to create more personalized services or more sticky services to increase the attention of their subscribers. They're looking to analytics for that. 

It can be with business-development managers within the CSP organization who are looking to create models of collaboration with the Yahoos and Facebooks of the world, with retailers, or with any other participants in their ecosystem, where the CSP can bring the ability to provide the pipe, back-end hosting of services, and intelligence about how the pipe is delivering the services and about the sentiment of the customers on the other end of the pipe. 

They want to share information of value with their customers, making those customers dependent on them in new ways that aren't just about the pipe, and thereby gaining new revenue streams. That's the kind of motivation they have. It can be with IT folks as well, but at the end of the day the discussion about CSP Big Data isn't coming from the technology. It's coming from the business people who understand that they need to do something with the data and monetize it.

Then, of course, it quickly becomes a technical discussion, but the motion is business to technology, rather than infrastructure to technology. 

Support practice

We also developed a support practice within our organization that does exactly that: business advisory workshops. They help stakeholders in different roles realize what the priorities are in using Big Data and what roadmap they want to implement. 

The purpose of this exercise is to quickly bring everybody into the same room, sit together for a day or two, and come out with an agreement on how to move from conventional services to more personalized services and diversify the business channels by putting their information and data to work.

For several years now, one large customer, Telefónica, which operates across much of Latin America, has been working with us on analytics projects to improve the quality of experience of their subscribers. 

In Latin America, most people are interested in football, and many of them want to watch it on their mobile device. The challenge is that they all want to watch it during the same 90 minutes. That’s a challenge for any mobile operator, and that’s exactly where we started a critical project with Telefónica. 

We’re helping them analyze the quality of experience. Realizing the quality of the experience isn’t a very complicated thing. There are probes in the network to do that. We can pretty accurately get the quality of experience for every single video streaming session. It’s no big deal.

Analytics kicks in when you want to correlate this aggregation of quality with who the subscriber is, how the subscriber is expected to behave, and what he’s interested in. We know that the quality isn’t good enough for many subscribers during the football game, but we need to differentiate and know to which one of them we want to make an offer to upgrade his package. What’s the right offer? When’s the right time to make the offer? How many different offers do we test to zero in on the best set of offers?

We want to know which one of them we don’t want to promote anything to, but just want to make him happy. We want to give him a better quality experience for free, because he is a good customer and we don’t want to lose him. And we want to know which customer we want to come back to later, apologize, and offer him a better deal.

Real-time analytics

Based on real-time triggering of events from the network, such as degradation of quality, combined with ongoing information about the subscriber (who the subscriber is, what marketing segment he belongs to, what package he's subscribed to, and so on), we do the analytics in real time and decide what the right action and the right move are, in order to give the best experience to the individual subscriber. 
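
To make the decision flow concrete, here is a minimal sketch, in Python, of the kind of real-time correlation logic Ringer describes: a quality-degradation event is joined with what the operator already knows about the subscriber, and a contextual next action is chosen. The field names, segments, and thresholds are hypothetical illustrations, not HP's actual implementation.

```python
# Hypothetical sketch of real-time next-best-action logic for a CSP.
# Field names, segments, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class QoEEvent:
    subscriber_id: str
    service: str          # e.g. "video_streaming"
    quality_score: float  # 0.0 (unwatchable) to 1.0 (perfect)

@dataclass
class SubscriberProfile:
    segment: str               # e.g. "high_value", "standard"
    package: str               # current subscription tier
    upgrade_propensity: float  # modeled likelihood of accepting an upsell

def decide_action(event: QoEEvent, profile: SubscriberProfile) -> str:
    """Correlate a degradation event with the subscriber profile in real time."""
    if event.quality_score >= 0.8:
        return "no_action"  # experience is fine, do nothing
    if profile.segment == "high_value":
        # Keep the best customers happy: boost quality for free, no offer.
        return "grant_temporary_quality_boost"
    if profile.upgrade_propensity > 0.6:
        # Likely to say yes: make a contextual upsell offer right now.
        return f"offer_upgrade_from_{profile.package}"
    # Otherwise come back later with an apology and a better deal.
    return "queue_follow_up_offer"

# Example: a mid-game streaming glitch for a subscriber likely to upgrade.
event = QoEEvent("sub-123", "video_streaming", quality_score=0.45)
profile = SubscriberProfile("standard", "basic_mobile", upgrade_propensity=0.7)
print(decide_action(event, profile))  # -> offer_upgrade_from_basic_mobile
```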

It’s working very nicely for them. I like this example, first of all, because it’s real, but also because it shows the variety of processes we have here with correlation of real-time information with ongoing information for the subscribers. We have contextual action that is taken to monetize and to improve quality and to improve satisfaction. 

This example touches so many needs of an operator, and it's all done in a pretty straightforward manner. The implementation is rather simple. It's all based on running the right processes and putting the right business process in place. But this isn't always straightforward for enterprise customers, particularly those in the small-to-medium enterprise segment, so imagine what CSPs could do for their customers once they've gotten a handle on this for their own businesses.

We have contextual action that is taken to monetize and to improve quality and to improve satisfaction. 

Gardner: It seems to me that that helps reduce the risk of a provider or their customers coming out with new services. If they know that they can adjust rapidly and can make good on services, perhaps this gives them more runway to take off with new services, knowing that they can adjust and be more agile. It seems like it really fundamentally changes how well they can do their business.

Ringer: Absolutely. It also reduces the risk of investment quite a lot. If you launch a new service and find out that you need to beef up your entire network, that is a major hit to your investment strategy. At the same time, if you realize that you can be very granular and very selective in your investment, you can do it much more easily and justify subsequent investments more clearly.

Gardner: Are there any other examples of how this is manifesting itself in the market -- the use of Big Data in the telecommunications industry? 

Ringer: Let me give another example in North America. This is an implementation that we did for a large mobile operator in North America, in collaboration with a chain of retail malls. 

What we did there is combine the ongoing information that the mobile operator has about its subscribers -- he knows what the subscriber is interested in, what their prior buying patterns and transactions were, and so on -- with the location information of where the individual person is at the mall. 

The mall operator runs a private Wi-Fi network there, so he has his own system for tracking exactly where an individual is within the mall. He knows within two meters where a person is in the mall, with a map overlay tying the physical mall and all product and service offerings to the same grid.

When we know a person is in the mall, we can correlate it with what the CSP already knows about this person. He may know that this specific person has a high probability of looking for a specific running shoe. The mobile operator knows it because he tracks the web behavior of that individual. He tracks the profile of the specific individual, and he can tell with pretty good accuracy that this person, given the right offer, will say yes to running shoes. 

Targeted and timely

So combining these two things, the ongoing analytics of preferences together with real-time location information, gives us the ability to push out targeted and timely promotions and coupons.

Imagine that you're in the mall and you pass by the shoe store. Your device pops up a message that says that, right now, Nike shoes are 50 percent off for the next 15 minutes. You know that you're looking for Nike shoes, so the chance that you'll go into the store is very good, and the results are very good, because you create a "buy-now or you'll miss-out" feeling in the prospect. Many subscribers take the coupons that are pushed to them in this way. 

Of course, it's all based on opt-in, and of course, it's very granular, in the sense that the analytics we do on subscriber information are limited to what each subscriber has allowed us to look at. For instance, a specific person may allow us to look at his behavior on retail sites, but not on financial sites. 
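
As an illustration only, the mall scenario can be reduced to logic like the sketch below, which combines the operator's modeled interest score with the store category the shopper is standing near, and honors the per-category opt-in Ringer mentions. All names, categories, and thresholds here are assumptions made for the example.

```python
# Hypothetical sketch of the location-plus-preference coupon decision.
# Categories, scores, and the threshold are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, Optional, Set

@dataclass
class Subscriber:
    subscriber_id: str
    opted_in_categories: Set[str] = field(default_factory=set)       # what analysis is allowed
    interest_scores: Dict[str, float] = field(default_factory=dict)  # modeled purchase interest

def maybe_push_coupon(sub: Subscriber, nearby_store_category: str,
                      interest_threshold: float = 0.7) -> Optional[str]:
    # Respect the opt-in: never analyze or act on categories the subscriber excluded.
    if nearby_store_category not in sub.opted_in_categories:
        return None
    # Only push when the modeled interest is high enough to be worth the interruption.
    if sub.interest_scores.get(nearby_store_category, 0.0) < interest_threshold:
        return None
    return (f"Push to {sub.subscriber_id}: 50% off at the nearby "
            f"{nearby_store_category} store for the next 15 minutes")

shopper = Subscriber(
    subscriber_id="sub-456",
    opted_in_categories={"retail"},      # allowed retail analysis, not financial
    interest_scores={"retail": 0.85},    # e.g. has been browsing running shoes
)
print(maybe_push_coupon(shopper, "retail"))     # coupon is pushed
print(maybe_push_coupon(shopper, "financial"))  # None: category not opted in
```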

Gardner: Again, this shows a fundamental shift that the communications provider is not just a conduit for information, but can also offer value-added services to both the seller and the buyer -- radically changing their position in their markets. 

If I am an organization in the CSP industry and I listen to you and I have some interest in pursuing better Big Data analytics, how do I get started? Where can I go for more information? What is it that you’ve put together that allows me to work on this rather quickly?

Ringer: As I mentioned before, we typically recommend engaging in a two-day workshop with our business consultants. We have a large team of Big Data advisory consultants, and that’s exactly what they do. They understand the priorities and work together with the telecom organizations to come up with some kind of a roadmap -- what they want to do, what they can do, what they are going to do first, and what they are going to do later. 

They all look to become more proactive, they all realize that data is an asset and is something that you need to keep handy, keep private, and keep secured.

That’s our preferred way of approaching this discipline. Overall, there are so many kinds of use cases, and we need to decide where to start. So that’s how we start. To engage, the best place is to go to our website. We have lots of information there. The URL is hp.com/go/telcoBigData, that’s one word, and from there you just click Contact Us, and we’ll get back to you. We’ll take you from there. There are no commitments, but chances are very good.

Gardner: Before we sign off, I just wanted to look into the future. As you pointed out, more and more entertainment and media services are being delivered through communication providers. The mobile aspect of our lives continues to grow rapidly. And, of course, now that cloud computing has become more prominent, we can expect that more data will be available across cloud infrastructures, which can be daunting, but also very powerful. Where do you see the future challenges, and what are some of the opportunities?

Ringer: We can summarize four main trends that we’re seeing increasing and accelerating. One is that CSPs are becoming more active in enabling new business models with partnerships, collaborations, internet players, and so on. This is a major trend. 

The second trend that we see increasing quite intensively is operators becoming like marketing organizations, promoting services of their own or on behalf of others.

The third one is more related to the operation of the CSP itself. They need to be more aware of where they invest, what their risk and probability of seeing a specific ROI are, and when it will occur. In short, Big Data and analytics will make them smarter and more proactive in making those investments. That's another driver that increases their interest in using the data. 

Overall, they all look to become more proactive. They all realize that data is an asset, something you need to keep handy, keep private, and keep secure, but also be able to use for a variety of use cases and processes to be ready for the next move. 

Tags:  big data  BriefingsDirect  CSPs  Dana Gardner  data analysis  HP  Interarbor Solutions  Oded Ringer  service providers  structured data  unstructured data 

 

A gift that keeps giving, software-defined storage now showing IT architecture-wide benefits

Posted By Dana L Gardner, Monday, August 04, 2014

The next BriefingsDirect deep-dive discussion explores how one of the most costly and complex parts of any enterprise's IT infrastructure -- storage -- is being dramatically improved by the accelerating adoption of software-defined storage (SDS).

The ability to choose low-cost hardware, to manage across different types of storage, and radically simplify data storage via intelligent automation means a virtual rewriting of the economics of data.

But just as IT leaders seek to simultaneously tackle storage pain points of scalability, availability, agility, and cost -- software-defined storage is also providing significant strategic- and architectural-level benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're joined by two executives from VMware to unpack these efficiencies and examine the broad innovation behind the rush to exploit software-defined storage, Alberto Farronato, Director of Product Marketing for Cloud Infrastructure Storage and Availability at VMware, and Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Software-defined storage is changing something more fundamental than just data and economics of data. How do you see the wider implications of what’s happening now that software-defined storage is becoming more common?

Farronato: Software-defined storage is certainly about addressing the cost issue of storage, but more importantly, as you said, it’s also about operations. In fact, the overarching goal that VMware has is to bring to storage the efficient operational model that we brought to compute with server virtualization. So we have a set of initiatives around improving storage on all levels, and building a parallel evolution of storage to what we did with compute. We're very excited about what’s coming.

Gardner: Christos, one of my favorite sayings is that "architecture is IT destiny." How you see software-defined storage at that architectural level? How does it change the game?

Concept of flexibility

Karamanolis: The fundamental architectural principle behind software-defined storage is the concept of flexibility. It's the idea of being able to adapt to different hardware resources, whether those are magnetic disks, flash storage, or other types of non-volatile memories in the future.


How does the end user adapt their storage platform to the needs they have in terms of the capabilities of the hardware, the ratios of the different types of storage, the networking, the CPU resources, and the memory resources needed for executing and providing their service to what's ahead?

That's one part of flexibility, but there is another very interesting part, which relates to a very acute problem for VMware customers today: the operational complexity of provisioning storage for applications and virtual machines (VMs), one common way of packaging applications.

Today, customers virtualize environments, but in general they still have to provision physical storage containers. They have to anticipate their usage over time and make an investment up front in resources that they'll need over a long period of time. So they create the logical unit numbers (LUNs), file services, or whatever is needed, for a period of time that spans anything from weeks to years.

Software-defined storage advocates a new model, where applications and VMs are provisioned at the time that the user needs them. The storage resources that they need are provisioned on-demand, exactly for what the application and the user needs -- nothing more or less.

The idea is that you do this in a way that is really intuitive to the end-user, in a way that reflects the abstractions that user understands -- applications, the data containers that the applications need, and the characteristics of the application workloads.

So those two aspects of flexibility are the two fundamental aspects of any software-defined storage.

Gardner: As we see this increased agility, flexibility, the on-demand nature of virtualization now coupled with software-defined storage, how are organizations benefiting at a business level?


Farronato: There are several benefits and several outcomes of adopting software-defined storage. The first that I would call out is the ability to be much more responsive to the business needs -- and the changing business needs -- in the form of delivering what your applications need, faster.

As Christos was saying, in the old model you had to guess ahead of time what the applications would need, spend a lot of time trying to preconfigure and predetermine the various service levels (performance, availability, and other things that your application would require of storage), spend a lot of time setting things up, and then hopefully, down the line, consume it the way you thought you would.

Difficult change management

In many cases, this causes long provisioning cycles. It causes difficult change management after you provision the application. You find that you need to change things around, because either the business needs have changed or what you guessed was wrong. For example, customers have to face constant data migration.

With the policy-driven approach that Christos has just described -- with the ability to create these storage services on the fly through a policy -- you don't have to do all that pre-provisioning and preconfiguring. As you create the VMs and specify the requirements, the system responds accordingly. When you have to change things, you just modify the policy and everything in the underlying infrastructure changes accordingly.

Responsiveness, in my opinion, is the one biggest benefit that IT will deliver to the business by shifting to software-defined storage. There are many others, but I want to focus on the most important one.

When you have to change things, you just modify the policy and everything in the underlying infrastructure will change accordingly.

Gardner: Can you explain what happens when software-defined storage becomes strategic at the applications level, perhaps with implications across the entire data lifecycle?

Karamanolis: One thing we already see, not only among VMware customers, but as a more generic trend, is that infrastructure administrators -- the guys who do the heavy-lifting in the data centers day in and day out, who manage much more beyond what is traditionally servers and applications -- are getting more and more into managing networks and data storage.

Find SDS technical insights and best practices on the VSAN storage blog.

Talking about changing models here, what we see is that tools have to be developed and software-defined storage is a key technology evolution behind that. These are tools for those administrators to manage all those resources that they need to make their day-to-day jobs happen.

Here, software-defined storage is playing a key role. With technology like Virtual SAN, we make the management of storage visible for people who are not necessarily experts in the esoterics of a certain vendor's hardware. It allows more IT professionals to specify the requirements of their applications.

Then, the software storage platform can apply those requirements on the fly to provision, configure, and dynamically monitor and enforce compliance for the policy and requirements that are specified for the applications. This is a major shift we see in the IT industry today, and it’s going to be accelerated by technologies like Virtual SAN.

Gardner: When you go to software-defined storage, you can get to policy level, automation, and intelligence when it comes to how you're executing on storage. How does software-defined storage simplify storage overall?

Distributed platform

Karamanolis: That's an interesting point, because if you think about this superficially, we now go from a single, monolithic storage entity to a storage platform that is distributed, controlled by software, and can span tens or sometimes hundreds of physical nodes and/or entities. Isn't the complexity greater in the latter case?

The reality is that, whether out of necessity or because we've learned a lot over the last 10 to 15 years about how to manage and control large distributed systems, there has been a parallel evolution of ideas about how you manage your infrastructure, including the management of storage.

The user has to be exposed to the consequences of the policy they choose. There is a cost there for every one of those services.

As we alluded to already, the fundamental model here is that the end user, the IT professional that manages this infrastructure, expresses in a descriptive way, what they need for their applications in terms of CPU, memory, networking, and, in our case, storage.

What do I mean by descriptive? The IT professional does not need to understand all the internal details of the technologies or the hardware used at any point in time, and which may evolve over a period of time.

Instead, they express at a high level a set of requirements -- we call them policies -- that capture the requirements of the application. For example, in the case of storage, they specify the level of availability that is required for certain applications and performance goals, and they can also specify things like the data protection policies for certain data sets.

Of course, for all those things, nothing comes for free. So the user has to be exposed to the consequences of the policy that they choose. There is a cost there for every one of those services.

But the key point is that the software platform automatically configures the appropriate resources, whether they're arrayed across multiple physical devices, arrayed across the network, or whether data is replicated asynchronously to a remote location in order to comply with certain disaster recovery (DR) policies.

All those things are done by the software, without the user having to worry about whether the storage underneath is highly available storage, in which case they need to be able to create only two copies of the data, or whether it is of some low-end hardware for which that would require three or four copies of the data. All those things are determined automatically by the platform.
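
To make that concrete, here is a generic sketch, not VMware's actual logic, of how a policy-driven storage layer might translate a declared failures-to-tolerate requirement into a replica count while accounting for how reliable the underlying hardware is. The policy fields and device classes are illustrative assumptions.

```python
# Generic sketch of policy-driven replica placement (not VMware's actual logic).
# Policy fields, device classes, and the extra-copy rule are illustrative only.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    failures_to_tolerate: int  # how many host/device failures the data must survive

@dataclass
class DeviceClass:
    name: str
    highly_available: bool     # e.g. redundant enterprise array vs. plain server disks

def replicas_needed(policy: StoragePolicy, device: DeviceClass) -> int:
    """Decide how many copies of the data the platform should keep."""
    base = policy.failures_to_tolerate + 1
    # On low-end, non-redundant hardware, keep an extra copy as a safety margin,
    # the kind of decision the platform makes so the admin doesn't have to.
    return base if device.highly_available else base + 1

enterprise_array = DeviceClass("ha-array", highly_available=True)
commodity_disks = DeviceClass("server-disks", highly_available=False)
policy = StoragePolicy(failures_to_tolerate=1)

print(replicas_needed(policy, enterprise_array))  # 2 copies
print(replicas_needed(policy, commodity_disks))   # 3 copies
```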

This is the new model. Perhaps I'm oversimplifying some of these problems, but the idea is that the user should really not have to know the specific hardware configurations of a disk array. If the requirements cannot be met, it is because these new technologies are not incorporated into the storage platform.

Policy driven

Farronato: Virtual SAN is a completely policy-driven product, and we call it VM-centric or application-centric. The whole management paradigm for storage, when you use Virtual SAN, is predicated around the VM and the policies that you create and you assign to the VMs as you create your VMs, as you scale your environment.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store. In the past, you had to create individual LUNs or volumes, assign data services like replication or RAID levels to each individual volume, and then map the application to them.

With Virtual SAN, you're simply going to have a capacity container that happens to be distributed across a number of nodes in your cluster -- and everything that happens from that point on is just dropping your VMs into this container. It automatically instantiates all the data services by virtue of having built-in intelligence that interprets the requirements of the policy.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store.

That makes this system extremely simple and intuitive to use. In fact, one of the core design objectives of Virtual SAN is simplicity. If you look at a short description of the system, the radically simple hypervisor-converged storage means bringing that idea of eliminating the complexity of storage to the next level.

Gardner: We've talked about simplicity, policy driven, automation, and optimization. It seems to me that those add up very quickly to a fit-for-purpose approach to storage, so that we are not under-provisioning or over-provisioning, and that can lead to significant cost-savings.

So let’s translate this back to economics. Alberto, do you have any thoughts on how we lower total cost of ownership (TCO) through these SDS approaches of simplicity, optimization, policy driven, and intelligence?

Farronato: There are always two sides of the equation. There is a CAPEX and an OPEX component. Looking at how a product like Virtual SAN reduces CAPEX, there are several ways, but I can mention a couple of key components or drivers.

First, I'd call out the fact that it is an x86 server-based storage area network (SAN). It leverages server-side components to deliver shared storage. By virtue of using server-side resources, right off the bat there are significant savings that you can achieve through lower-cost hardware components. The same hard drive or solid-state drive (SSD) that you would deploy in a shared external storage array could be on the order of 80 percent cheaper when purchased as a server component.

The other aspect that I would call out that reduces the overall CAPEX cost is more along the lines of this, as you said, consume on-demand approach or, as we put it in many other terms, grow-as-you-go. With a scale-out model, you can start with a small deployment and a small upfront investment.

You can then progressively scale out as your environment grows, with much finer granularity than you would have with a monolithic array. And as you scale, you scale both compute and IOPS, which often goes hand in hand with the number of VMs that you are running in your cluster.

System growth

So the system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.
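
A back-of-the-envelope sketch shows the shape of that argument: buying scale-out nodes only as demand grows, versus committing to a large monolithic array on day one. The capacities and prices below are hypothetical, chosen only to roughly echo the 80 percent per-drive savings mentioned above.

```python
# Hypothetical grow-as-you-go cost comparison; all prices and capacities are invented.
def upfront_array_cost(total_capacity_tb, price_per_tb):
    # Monolithic array: the full target capacity is purchased on day one.
    return total_capacity_tb * price_per_tb

def scale_out_cost(capacity_needed_by_quarter, node_capacity_tb, node_price):
    """Cumulative spend when nodes are added only as demand requires them."""
    spend, nodes, cumulative = 0.0, 0, []
    for needed in capacity_needed_by_quarter:
        while nodes * node_capacity_tb < needed:
            nodes += 1
            spend += node_price
        cumulative.append(spend)
    return cumulative

# Demand grows from 20 TB to 100 TB over eight quarters (hypothetical).
demand = [20, 35, 50, 65, 80, 90, 95, 100]
print(upfront_array_cost(100, price_per_tb=2500))
# 250000 committed up front
print(scale_out_cost(demand, node_capacity_tb=10, node_price=4000))
# [8000.0, 16000.0, 20000.0, 28000.0, 32000.0, 36000.0, 40000.0, 40000.0]
```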

On the OPEX side, when things become simpler, it means that overall administration productivity increases. So we expect a trend where individual administrators will be able to manage a greater amount of capacity, and to do so in conjunction with management of the virtual infrastructure to achieve additional benefits.

Gardner: Virtual SAN has been in general availability now for several months, since March 2014, after being announced last year at VMworld 2013. Now that it’s in place and growing in the market, are there any unintended benefits or unintended consequences from that total-cost perspective in real-world day-in, day-out operations?

The system grows with the size of your environment, rather than require you to buy a lot of resources upfront that many times remain under-utilized for a long time.

I'm looking for ways in which a typical organization is seeing software-defined storage benefiting them culturally and organizationally in terms of skills, labor, and that sort of softer metric.

Karamanolis: That's a very interesting point. We technologists sometimes tend to overlook the cultural shifts that technology causes in the field. In the case of Virtual SAN, we see a lot of what one customer described as being empowered to manage their own storage, within the vertical they control in their IT organization, without having to depend on the centralized storage organization in the company.

What we really see here is a shift in paradigm in how our customers use Virtual SAN today. It enables them to have a much faster turnaround for trying new applications and new workloads, and for getting them from test and dev into production, without being constrained by the processes and timelines imposed by a central storage IT organization.

This is a major achievement, and the major tool for VMware administrators in the field, which we believe is going to lead the way to a much wider adoption of Virtual SAN and software-defined storage in general.

Gardner: How does this simplification and automation have a governance, risk, and compliance (GRC) benefit?

Farronato: With this approach you have a more granular way to control the service levels that you deliver to your customers, to your internal customers, and a more efficient way to do it, by standardizing through policies rather than trying to standardize service levels over a category of hardware.

Self-service consumption

You can more easily keep track of what each individual application is receiving, and whether it's in compliance with the particular policy that you specified. You can also now enable self-service consumption more easily and effectively.

We have, as part of our Policy-Based Management Engine, APIs that allow for integration with cloud automation frameworks, such as vCloud Automation Center or OpenStack, where end users will be able to consume a predefined category of service.

It will speed up the provisioning process, while at the same time, enabling IT to maintain that control and visibility that all the admins want to maintain over how the resources are consumed and allocated.

You can also now enable self-service consumption more easily and effectively.

Gardner: I suppose there are as many on-ramps to software-defined data center as there are enterprises. So it's interesting that it can be done at that custom level, based on actual implementation, but also have a strategic vision or a strategic architectural direction. So, it's future-proof as well as supporting legacy.

How about some examples? Do we have either use-case scenarios or an actual organization that we can look to and say that they have deployed Virtual SAN and benefited in certain ways, and that they are indicative of what others should expect? 

Farronato: Let me give you some statistics and some interesting facts. We can look at some of the early examples where, in the last three months since the product has become available, we've found a significant success already in the marketplace, with a great start in terms of adoption from our customers.

Find SDS technical insights and best practices on the VSAN storage blog.

We already have more than 300 paying customers in just one quarter. That follows the great success of the public beta that ran through the fall and the early winter with several thousand customers testing and taking a look at the product. 

We are finding that virtual desktop infrastructure (VDI) is the most popular use case for Virtual SAN right now. There are a number of reasons why Virtual SAN fits this model, from the scale-out design to the fact that the hyper-converged storage architecture is particularly suitable for addressing the storage issues of a VDI deployment.

DevOps, or if you want, preproduction environments, loosely defined as test dev, is another area. There are disaster recovery targets in combination with vSphere Replication and Site Recovery Manager. And some of the more aggressive customers are also starting to deploy it in production use cases.

In the last three months since the product has become available, we've found a significant success already in the marketplace.

As I said, the 300 customers that we already have span the gamut in terms of size and names. We have large enterprises, banking, down to the smaller accounts and companies, including education or smaller SMBs. 

There are a couple of interesting cases that we'll be showcasing at VMworld 2014 in late-August. If you look at the session list, they're already available as actual use cases presented by our customers themselves.

Adobe will be talking about their massive implementation of Virtual SAN for their production environment, on their data analytics platform. There will be another interesting use case with TeleTech, talking about how they have leveraged Cisco UCS to progress their VDI deployments.

VDI equation

Gardner: I'd like to revisit the VDI equation for a moment, because one of the things that's held people up is the impact on storage, and the costs associated with the storage to support VDI. But if you're able to bring down costs by 50 percent in some cases using software-defined storage, that radically changes the VDI equation. Isn't that the case, Christos? Can you now say that you can do VDI more cheaply than almost any other approach to a virtualized desktop?

Karamanolis: Absolutely, and the cost of storage is the main impediment for organizations implementing a VDI strategy. With Virtual SAN, as Alberto mentioned earlier, we provide a very compelling cost proposition, both in terms of the capacity of the storage and the performance you gain out of the storage.

You get what you need, both capacity and performance, for your VDI workloads at a fraction of the cost you would pay for traditional disk array storage.

Alberto already touched on the cost of the capacity, referring to the difference in prices one can get from server vendors and from the market, as opposed to the same hardware being procured as part of a traditional disk array.

I'd like to touch on something that is an unsung hero of Virtual SAN and of VDI deployment especially, and that's performance. Virtual SAN, as should be clear by now, is a storage platform that is strongly integrated with our hypervisor. Specifically, the data path implementation and the distributed protocols that are implemented in Virtual SAN are part of the ESXi kernel.

That means we can achieve very high performance goals while minimizing the CPU cycles consumed to serve a high number of I/Os per second. What that means, especially for VDI, is that we use only a small slice of the CPU and memory of every single ESXi host to implement this distributed, software-driven storage controller.

It doesn't noticeably affect the VMs that run on the same ESXi host. We have already published extensive and detailed performance evaluations, where we compare VDI deployments on Virtual SAN versus an external disk array.

And even though Virtual SAN's use of local CPU and memory on those hosts is capped at about 10 percent, the consolidation ratio, the number of virtual desktops we run on those clusters, is virtually unaffected, while we get the full performance that would be realized with an external, all-flash disk array. That is the value of Virtual SAN in those environments.

Essentially, you get what you need, both capacity and performance, for your VDI workloads at a fraction of the cost you would pay for traditional disk array storage.

Gardner: We're only a few weeks from VMworld 2014 in San Francisco, and I know there's going to be a lot of interest in mobile and in desktop infrastructure for virtualized desktops and applications.

Do you think that we can make some sort of a determination about 2014? Maybe this is the year that we turn the corner on VDI, and that becomes a bigger driver of some of these higher efficiencies. Any closing thoughts on the vision for the software-defined data center, VDI, and the timing with VMworld, Alberto?

Last barrier

Farronato: Certainly, one of the goals that we set for ourselves with this Virtual SAN release was solving the VDI use case, eliminating probably the last barrier and enabling a broader adoption of VDI across the enterprise, and we hope that will materialize. We're very excited about what the early findings show.

With respect to VMworld and some of the other things that we'll be talking about at the conference with respect to storage, we'll continue to explain our vision of software-defined storage, talk about the Virtual SAN momentum, some of the key initiatives that we are rolling out with our OEM partners around things such as Virtual SAN Ready Nodes.

We're going to talk about how we will extend the concept of policy management and dynamic composition of storage services to external storage, with a technology called Virtual Volumes.

There are many other things, and it's gearing up to be a very exciting VMworld Conference for storage-related issues.

Gardner: Last word to you, Christos. Do you have any thoughts about why 2014 is such a pivotal time in the software-defined storage evolution?

Karamanolis: I think that this is the year where the vision that we've been talking about, us and the industry at large, is going to become real in the eyes of some of the bigger, more conservative enterprise IT organizations.

With Virtual SAN from VMware, we're going to make a very strong case at VMworld that this is a real enterprise-class storage system that's applicable across a very wide range of use cases and customers.

With actual customers using the product in the field, I believe that it is going to be strong evidence for the rest of the industry that software-defined storage is real, that it is solving real-world problems, and that it is here to stay.

Together with opening up to third parties some of the management APIs that Virtual SAN uses in VMware products, through the Virtual Volumes technology that Alberto mentioned, we'll also be initiating an industry-wide effort of making, providing, and offering software-defined storage solutions beyond just VMware and the early companies, mostly startups so far, that have been adopting this model. It's going to become a key industry direction.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

Tags:  Alberto Farronato  BriefingsDirect  Christos Karamanolis  Dana Gardner  Interarbor Solutions  Software-defined storage  vdi  Virtual SAN  VMWare  VMWorld 

 

Advanced cloud service automation eases application delivery for global service provider NNIT

Posted By Dana L Gardner, Thursday, July 31, 2014

As a provider of both application development management and infrastructure outsourcing, Denmark-based NNIT needed a better way to track, manage and govern the more than 10,000 services across its global data centers.

Beginning in 2010, the journey to better overall services automation paved the way to far stronger cloud services delivery, too. NNIT uses HP Cloud Service Automation (CSA) to improve their deployment of IT applications and data, and to provide higher overall service delivery speed and efficiency.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how services standardization leads to improved cloud automation, BriefingsDirect spoke with Jesper Bagh, IT Architect and cloud expert at NNIT, based in Copenhagen. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your company and what you do. Then, we’ll get into some of the services delivery problems and solutions that you've been tasked with resolving.

Bagh: NNIT is a service provider located in Denmark. We have offices around the world, in China, the Philippines, the Czech Republic, and the United States. We're 2,200 employees globally, and we're a subsidiary of Novo Nordisk, the pharmaceutical company.


My responsibility is to ensure that the company's business goals can be delivered through functional requirements, and to turn those functional requirements into projects that can be delivered by the organization.

We’re a wall-to-wall, full-service provider. So we provide both application development management and infrastructure outsourcing. Cloud is just one aspect that we’re delivering services on. We started off by doing service-portfolio management and cataloging of our services, trying to standardize the services that we have on the shelf ready for our customers.

That allowed us to then put offerings into a cloud, and to show the process benefits of standardizing services, doing cloud well, and focusing on dedicated customers. We still have customers using our facility management who are not able to leverage cloud services because of compliance or regulatory demands.

We have roughly over 10,000 services in our data centers. We’re trying now to broaden the capabilities of cloud delivery to the rest of the infrastructure so that we get a more competitive edge. We’re able to deliver better quality, and the end users -- at the end of the day -- get their services faster.

Back in the good old days, developers were in one silo and operations were in another silo. Now, we see a mix of resources, both in operations and in development.

Full suite

We embarked on CSA together with HP back in 2010. Back then, CSA consisted of many different software applications. It wasn't really complete software back then. Now, it’s a full suite of software.

It has helped us to show to our internal groups -- and our customers -- that we have services in the cloud. For us it has been a tremendous journey to show that you can deliver these services fully automatically, and by running them well, we can gain great efficiency.

Gardner: How has this benefited your speed-to-value when it comes to new applications?

Bagh: The adoption of automation is an ongoing journey. I imagine other companies have also had the opportunity of adopting a new breed of software, and a new life in automation and orchestration. What we see is that the traditional operations divisions now suddenly get developers trying to comprehend what they mean, and trying to have them work together to deliver operations automatically.

Back in the good old days, developers were in one silo, and operations were in another silo. Now, we see a mix of resources -- both in operations and in development. So the organizational change management derived from automation projects is key. We started up, when we did service cataloging and service portfolio management, by doing organizational change to see if this could fit into our vision.

Gardner:  Now, a lot of people these days like to measure things. It’s a very data-driven era. Have you been able to develop any metrics of how your service automation and cloud-infrastructure developments have shown results, whether it’s productivity benefits or speeds and feeds? Have you measured this as a time-to-value or a time-to-delivery benefit? What have you come up with?

Value-add

Bagh: As part of the cloud project, we did two things. We did infrastructure as a service (IaaS), but we also did a value-add on IaaS. We were able to deliver qualified, fully compliant IaaS to the life-science industry. In the traditional infrastructure, that alone would have taken us weeks or months to deliver servers, because of all the process work involved. When we did the CSA and the GxP Cloud, we were able to deliver the same server within a matter of hours. So that's a measurable efficiency that is highly recognized.

Gardner:  For other organizations that are also grappling with these issues and trying to go over organization and silo boundaries for improvement in collaboration, do you have any words of advice? Now that you've been doing this for some time and at that key architect level, which I think is really important, what thoughts do you have that you could share with others, lessons learned perhaps?

Bagh: The lesson learned is that having senior management focus on the entire process is key. Having the organization recognized is a matter of change management. So communication is key. Standardization before automation is key.

You need to start out by standardizing your services, doing the real architectural work, identifying which components you have and which components you don't have, and matching them up. It's like laying out all the Lego blocks in order to build the house. That's key. The parallel I always use is that there is nothing different for me as an IT architect than for an architect building a house.

The next step for us is to be more proactive than reactive in our monitoring and reporting capabilities, because we want to be more transparent to our customers.

Gardner:  Looking to the future, are there other aspects of service delivery, perhaps ways in which you could gather insights into what's happening across your infrastructure and the results, that end users are seeing through the applications? Do you have any thoughts about where the next steps might be?

Bagh: The next step for us is to be more transparent to our customers. So the vision is now we can deliver services fully automatically. We can run them semi-automatically. We will still do funny stuff from time to time that you need to keep your eyes on. But in order for us to show the value, we need to report on it.

The next step for us is to be more proactive than reactive in our monitoring and reporting capabilities, because we want to be more transparent to our customers. We have a policy called Open and Honest Value-Adding. From that, we want to show our customers that if we can deliver a service fully automatically and standardized, they know what they get because they see it in a catalog. Then, we should be able to report on it live for the users.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Cloud Service Automation  Dana Gardner  HP  HP CSA  HP DISCOVER  Interarbor Solutions  Jesper Bagh  NNIT 


More than just an IT shift, cloud fuels the new engine of business innovation, says Oxford Economics survey

Posted By Dana L Gardner, Wednesday, July 30, 2014

Over the past five years, the impetus for cloud adoption has been primarily about advancing the IT infrastructure-as-a-service (IaaS) fabric or utility model, and increasingly seeking both applications and discrete IT workload support services from Internet-based providers.

But as adoption of these models has unfolded, it's become clear that the impacts and implications of cloud commerce are much broader and much more of a benefit to the business as a whole as an innovation engine, even across whole industries.

Recent research shows us that business leaders are now eager to move beyond cost and efficiency gains from cloud to reap far greater rewards, to in essence rewrite the rules of commerce.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our latest BriefingsDirect discussion therefore explores the expanding impact that cloud computing is having as a strategic business revolution -- and not just as an IT efficiency shift. Join a panel of experts and practitioners of cloud to unpack how modern enterprises have a unique opportunity to gain powerful new means to greater business outcomes.

Our panelists are: Ed Cone, the Managing Editor of Thought Leadership at Oxford Economics; Ralf Steinbach, Director of Global Software Architecture at Groupe Danone, the French food multinational based in Paris; Bryan Acker, Culture Change Ambassador for the TELUS Transformation Office at TELUS, the Canadian telecommunications firm; and Tim Minahan, Chief Marketing Officer for SAP Cloud and Line of Business Solutions. The panel is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What has the research at Oxford Economics been telling you about how cloud is reshaping businesses?

Cone: We did a survey for SAP last year, and that became the basis for this program. We went out to 200 executives around the world and asked them, "What are you doing in the cloud? Are you still looking at it for just process speed, efficiency, and cost cutting?"

Cone

The numbers that came back were really strong in terms of cloud actually being a part of the business function. Beyond those basics, cloud is very much part of the daily reality of companies today.

We saw that the leading expectation for cloud to deliver significant improvement was in productivity, innovation, and revenue generation. So obviously process, speed, efficiency, and cost cutting are still very important to business, but people are looking to cloud for new lines of business, entering new markets, and developing new products.

In this program, what we did was take that information and go out to executives for live interviews to dive deep into how cloud has become the new engine of business, how these expectations are being met at companies around the world.

Gardner: Are businesses doing this intentionally, or are they basically being forced by what's happening around them?

Minahan: Increasingly, as was just indicated, businesses are moving beyond the IT efficiencies and the total cost of ownership (TCO) benefits of the cloud, and the cloud certainly offers benefits in those areas.

Minahan

But really what's driving adoption, what's moving us to this tipping point, is that now, by some estimates, 75 percent of all new investments are going into the cloud or hybrid models. Increasingly, businesses are viewing the cloud as a platform for innovation and entirely new engagement models with their customers, their employees, their suppliers and partners, and in some cases, to create entirely new business models.

Just think about what cloud has done for our personal lives. Who would have thought, a few years ago, that Apple would be used to run your home? This is the Apple Home concept that allows you to monitor and manage all of your devices -- your air-conditioning, your alarm, music, and television -- remotely through the cloud.

There are even the quasi-business B2B and B2C models around crowdsourcing and crowdfunding from folks like Kickstarter, or payment offerings like Square. These are entirely new engagement models, new business models that are built on the back of this emergence of cloud, mobile, and social capabilities.

Gardner: Right, and it seems that one of these benefits is that we can cross boundaries of time, space, geography, what have you, very easily, almost transparently, and that requires new thinking in order to take advantage of it.

Bryan, as Culture Change Ambassador at TELUS, are you part of the process for helping people think differently and therefore be able to exploit what cloud enables?

Flexible work schedule

Acker: One hundred percent. It's actually a great segue, because at TELUS we have a flexible work arrangement, where we want 70 percent of our employees to be working either from home or remotely. What that means is we have to have the tools and the culture in place that people understand, that they can access data and relevant information, wherever they are.

Acker

It doesn't matter if they're at home, like I am today, on the road, or at a client site, they need to be able to get the information to provide the best customer experience and provide the right answer at the right time.

Collaboration is part of TELUS's cultural DNA, so by switching from some of the great tools we already offered to these new ones, we've actually been able to tear down silos we didn't even know we were creating.

We were trying to provide all the tools, but now people have an end-to-end view of every record for customers, as well as employees and the collaboration involving courses and learning opportunities. They have access to everything when they need it and they can take ownership of the customer experience or even their own career, which is fantastic for us.

Gardner: Ralf, at Danone, as Director of Global Software Architecture, you clearly have your feet on the IT path and you've seen how things have evolved. Do you see the shift to cloud as a modest evolution, or is this something that changes the game?

Steinbach: We've been looking at cloud for quite some time now. We've started several projects in the cloud, mainly in two areas. One involves the supporting functions of our business, which are HR, travel expenses, and mail. There, we see a huge advantage in using standardized services in the cloud.

Steinbach

In these functions we do not need anything specific. The cloud comes as a standard and you cannot change it the way you can with on-premises SAP systems. You can't adapt the code. But that is one area where we think there's value in using cloud applications.

The other area where we really see value in the cloud is in our digital marketing initiatives. There, we really need the flexibility of the cloud. Digital marketing is changing every day. There's a lot of innovation, and the cloud gives us flexibility in terms of the resources we need to support that. And the innovation cycles of our providers are much faster than they would be on premises. These are the two main areas where we use the cloud today.

Cone: Ralf, it was interesting to me, when I was reading through the transcript of your interview and working on the case studies we did, that it is even changing business models. It's allowing Danone to go straight to the consumer, where previously your customer had been the retailer. Cloud in new geographic markets is letting you reach straight to the end user, the end buyer.

Digital marketing

Steinbach: That's what I meant when I talked about digital marketing. Today, all consumer product goods companies like Danone are looking at connecting to their consumers, and not to the retailers as in the past. We're really focusing on the end consumer, and the cloud offers us new possibilities to do that, whether it is via mobile applications or websites and so on.

One thing that's important is the flexibility of the systems, because we don't know how many consumers we'll address. It could be a few, but it could be over a million. So we need a flexible architecture, and on premises we could not manage that.

Gardner: The concept of speed seems to come up more and more. We're talking about speed of innovation, agility, direct lines of communication to customers and, of course, also supply-chain direct communication speed as well. How prominent did you see speed and the need for speed in business in your recent research?


Cone: Well, speed was important -- and it's speed across different dimensions. It's speed to enter a new market or it's speed to collaborate within your own company, within your own organization.

This idea of taking IT and pushing it out to the people, to the customer, and really to the line of business allows them to have intimate contact and to move quickly, but also to break down these barriers of geography.

We did a case study with another large company, Hero, which is a large maker of motorcycles and two-wheeled vehicles in India. What they're doing with cloud-enabled, customer-facing technology is moving their service operation outside of dealerships into the countryside, out across India. They go to parks and they set up what they call service camps.

There, the speed element is the speed and the convenience with which you are able to get your bike serviced, and that's having a large measurable impact on their business. So it is speed, but it is speed across multiple dimensions.

New innovation

Minahan: At the core, the cloud is really all about unlocking new innovations, providing agility in the business, allowing companies to be able to adapt their processes very, very quickly, and even create entirely new engagement models, and that's what we are seeing.

It is not just the cloud, though. This convergence of cloud, big data, analytics, mobile and social, and business networks really ushers in ultimately a new paradigm for business computing, one where applications are no longer just built for enterprise compliance or to be the system of record. Instead, they're really designed to engage and empower the individual user.

It's one that ushers in a new era of innovation for the business, where we can enable new engagement models with customers, employees, suppliers, and other partners.

We've heard some great examples here, but some others were very similar to the experience that Danone has seen. T-Mobile is leveraging the cloud not to replace its traditional systems of record, but to extend them with the cloud, to create a new model for social care, helping monitor conversations about its brand, and engaging customer issues across multiple channels.


So not just their traditional support channels, but Twitter and Facebook, where these conversations are happening. It really has empowered them to deliver what has become a phenomenal kind of “Cinderella-worst-to-first” story for customer support and satisfaction.

Now, they're seeing first time resolution rates that have gone from the low teens to greater than 94 percent. Obviously, that has a massive impact on customer satisfaction and renewals and is all powered by not throwing out the systems that they've used so long, but by extending them with the cloud to achieve new innovations and then drive new engagement models.

Gardner: Tim, another factor here, in a sense, levels the playing field. When you move to the cloud, small-to-medium-sized businesses (SMBs) can enjoy the same benefits that you just described, for example, from T-Mobile. Are you at SAP seeing any movement in terms of the size or type of organizations that can exploit these new benefits?

Minahan: What's interesting, Dana, is that you and I have been around this industry for quite some time, and the original thought was that the cloud was the great democratizer of computing power.

It allowed SMBs to get the same level of applications and infrastructure support that their larger competitors have had for years. That's certainly true, but it is really the large enterprises that have been aggressively adopting this, at an equal pace with SMBs.

All sizes of companies

The cloud is being used to not only accelerate process efficiency and productivity, but to unlock innovations for all sized companies. Large enterprises like UPS, Deutsche Bank, and Danone are using cloud-based business applications. In the case of UPS and Deutsche Bank, they're using business networks to extend their traditional supply chain and financial systems to collaborate better with their suppliers, bankers, and other partners.

It's being used by small upstarts as well. These are companies that we talked about in the past like Mediafly, a mobile marketing start-up. It's using dynamic discounting solutions in the cloud to get paid faster, fund development of new features, and take on new business.

There's Sage Health Solutions, a company started by two stay-at-home moms in South Africa that has grown from zero to a multi-million-dollar operation. That is all powered by leveraging the cloud to enable new business models.

Cone: To follow on with what Tim said about the broad gamut of usage across companies of all sizes, and his earlier mention of mobile, what we saw in our survey is that mobile is of great importance to companies as a way of reaching their customers, and for internal productivity as well. But reaching customers is actually the higher priority, and that comes down to the old adage: you have to fish where the fish are.


Look at what Danone is doing when they're setting up direct-to-customer technologies and marketing. They're going into markets where people don't necessarily have laptops or landlines. They're leapfrogging that to a world where people have mobile devices.

So if you have mobile customers, and as Tim said, think of the consumer experience, that is how we all live our lives now. No matter what size your company is, you have to reach your customers the way your customer lives now -- and that is mobile.

Gardner: Tell us a little bit about your research, how you have gone about it, and how that new level of pervasive collaboration was demonstrated in your findings.

Baseline information

Cone: In terms of the research, as I said, we went out to 200 execs around the world and asked them a series of questions about what their investment plans were. It was baseline survey information. What are you doing in the cloud, how much of it are you doing, and what are the key benefits that you're getting?

Then, as we went deeper in this phase of the project, we found that collaboration has different meanings. It can be collaboration within the company. It can be with partners, which cloud platforms allow you to do more easily. It's also this key relationship, a key area of collaboration between IT and the business.

What we see in this research is that IT is increasingly seen as a partner for the business, as a way of driving revenue via the cloud. Across the four regions that we surveyed -- North America, Latin America, EMEA, and APAC -- we saw a very high percentage of companies say that IT is emerging as a valued partner of the business, not just a support function for it. I think that's a key collaborative relationship that I'm sure our guests are seeing in their own companies.

Gardner: Just to be clear, Ed, this is ongoing research. You're already back in the field and you'll be updating some of these findings soon?


Cone: Yes, we're really excited about that, Dana. We did this survey last year for SAP. Then, we jumped in about a year later using those numbers and did these in-depth research interviews to look at the use of the cloud to drive business. This summer, we're refielding the survey to see how things have changed and to see how the view of the future has changed.

We ask a lot of questions about where they are now, and where they think they'll be in three years. We're really interested to see how people are doing compared to the targets they set and what their new targets are. So we will have some fresh numbers and fresh reports to talk to you about by Q3 or Q4.

Gardner: Let us look into those actual examples now and go back to Bryan at TELUS.

Acker: I have a tangible example that might help express the value of collaboration at TELUS and something that people don't think about, and that is safety.

We have a lot of field technicians who are in remote areas, but have mobile access. A perfect example is that we can run into a situation where a technician may be a little unsure of what to do and it's potentially unsafe.

Because of the mobile access and the cloud, we've enabled them to quickly record a video, upload it directly to our SAP Jam system, which is our collaborative tool suite that we use, and share it with a collection of other technicians, not just the person they can call.

Safer situation

What happens is then people can say this is unsafe, you need to do X, Y and Z. We can even push them required training, so they can be sure that they're making the right decision. All of a sudden, that becomes a safer situation and the technician is not putting themselves at risk. This is really important because people do not think of those real, tangible examples. They often feel that they're just sharing information back and forth.

But in terms of what we are doing and where we are going, I sit in HR, and we're trying to improve the business process. We now have all of our information, the system of record, an integrated learning management system (LMS), our ability to analyze talent, so we make the correct hires.

We now trust the information implicitly and we're able to make the correct decision, whether it means customer information, recruiting choices, hiring choices, or performance choices.

Now, we're in a situation where we're only going to maximize and try to leverage the cloud for even more innovation, because now people are singing from the same choir sheet, so to speak.


We have access to the same system of record, a single source of truth, and that's the first time we've had that. Now, recruiting can talk to learning, which can talk to performance, which can talk to technicians, and we know they all get a consistent version of the truth. That is really important for us.

Gardner: Those are some excellent examples of how mobile enhances cloud. That extends the value of mobile. That brings in collaboration and, at the same time, creates data and analysis benefits that can then be fed back into that process.

So there really is a cyclical adoption value here. I'd like to go back to the cultural part of this. Bryan, how do you make sure that the adoption cycle doesn't spin out of control? Is there a risk of a lack of governance? Do you feel like you can control what goes on, or are we perhaps in a period of creative chaos that we should let spin on its own?

Acker: That's a great question, and I'm not sure if TELUS handles this in a unique way, but we definitely had a very detailed plan. The first thing we did was make collaboration one of our valued attributes, one of our leadership competencies. People are expected to collaborate, and their performance reviews depend on that.

What that means is we can provide tools to say that we're trying to facilitate collaboration. It doesn't matter if you're collaborating through a phone call, through a water-cooler chat, or through technology. Our employees are expected to collaborate. They know that it's part of their performance cycle and it's targeted towards their achievements for the year. We trust them to do the right thing.

We actually encourage a little bit of freedom. We want to push the boundaries. Our governance is not so tight that they are afraid to comment incorrectly or afraid to ask a tough question.

Flattening the hierarchy

What we're seeing now is individual team members challenging leadership positions on specific questions, and we're having honest and frank discussions that push the organization forward and make us make the correct choice every time, which is really encouraging. Now, we're really flattening our hierarchy, and the cloud is enabling us to do that.

Gardner: That sounds like a very powerful engine of innovation, allowing that freedom, but then having it be controlled, managed, and understood at the same time. That’s amazing. Ed, do you have any reactions to what Bryan just said about how innovation is manifesting itself newly there at TELUS?

Cone: When we spoke to TELUS, I was interested in that cultural aspect of it. I'm sure the guys on the call would disagree with me on a technical level, but we like to say that technology is easy, and culture is hard. The technology works, you implement it and you figure that out, but getting people to change is really difficult.

The example that we used in the case study we did with SAP on TELUS was about changing culture through gamification, allowing people to learn via an online, cloud-based virtual game. It was a massive effort and it engaged a huge number of employees across this large company.


It really shifted the employee culture, and that had an impact on customer service and therefore on business performance. It’s a way that the cloud is moving mountains and it’s addressing the hard thing to change, which is human behavior and attitudes.

Minahan: We talk all the time about the convergence of these different technologies -- cloud, social, and mobile. But beyond the convergence going on in technology, there is massive change going on in the workforce and in what constitutes the workforce.

Bryan talked about how there is a leveling of the organization, doing away with the traditional hierarchical command and control, where information is isolated in the hands of a few and new, eager employees don't get access to solve some of the tough problems. All of that is being flattened and accelerated, powered by cloud and social collaboration tools.

Also, we're seeing a shift in what constitutes the workforce. One of the biggest examples is the major shift in how companies are viewing the workforce. Contingent and statement of work (SOW) workers, basically non-payroll employees, now represent a third of the typical workforce. In the next few years, this will grow to more than half.

It’s already occurring in certain industries, like pharmaceuticals, mining, retail, and oil and gas. It's changing how folks view the workforce. They're moving from a functional management of someone -- this is their job; this is what they do -- to managing pools of talent or skills that can be rapidly deployed to address a given problem or develop a new innovative product or service.

These pools of talent will include both people on your payroll and off your payroll. Tracking, managing, organizing, and engaging these pools of talent is only possible through the cloud and through mobile, where multiple parties from multiple organizations can view, access, collaborate on, and share knowledge and experience on a shared technology platform.

Customer is evolving

Acker: That extends quite naturally to the customer. The customer is evolving faster than almost anything and they expect 24x7 access to support. They expect authentic responses and they now have access to just as much information as the customer service agent.

Without mobile, if you can't connect with those customers and be factual, you're in trouble. Your customers are going to reply in social-media channels and in public forums, and you're going to lose business and you're going to lose trust with your existing customers as well.

Minahan: I fully agree. The only addition to that is that they also expect to be able to engage you through any channel -- whether it's their mobile phone, their laptop, or, in some cases, directly face to face, on the phone, or in a retail outlet -- and have the same consistent experience, and not need to reintroduce who they are and what their problem is as they move from channel to channel.

Gardner: Clearly we're seeing how things that just weren’t possible before the cloud are having pervasive impacts on businesses. Let’s look at a new business example, again with Danone. Ralf, tell us a little bit about how cloud has had strategic implications for you. You have many brands, many lines of business. How is cloud allowing Danone to function better as a whole?


Steinbach: We have a strategy around digital marketing and, as you know, we're operating in almost every country in the world. Even though we're a big company, locally, we're sometimes quite small. We're trying to build up new markets in emerging countries with very small investments in the beginning. There, the cloud is definitely the best option for us to start these new businesses and connect to all consumers.

Money matters, even for a big company like Danone. That’s very important for us. If you look at Africa, there are completely different business models that we need to address.

People in Africa pay with their mobile phones. Some sell yogurt from a bicycle. Women pick up yogurt in the morning and then sell it on the road. We need to do business with these people as well. Obviously, an enterprise resource planning (ERP) system isn't able to do that, but the cloud is a much better-adapted platform for this sort of business.

Gardner: The C-suite likes to look at numbers. How do we measure innovation?

Metrics lacking

Cone: We're doing some research on another program right now on that very topic for a non-SAP program. That is showing us that metrics for success on basic things like key performance indicators (KPIs) for progress of migration into the cloud are lacking at a lot of companies. Basic return on investment (ROI) numbers are lacking at a lot of companies.

We're really old school. To go back to your definition of what a business is, we think it's an organization that's set up to make money for shareholders and deliver value for stakeholders. By those measures, at least by dotted line, the key metrics are your financial performance and whether you are entering, as we mentioned before, new markets and creating new products.

So the metrics we're seeing that are cloud specific aren't universal yet. In a broader sense, as cloud becomes an everyday set of tools, the point of those tools is to make the business run better, and we are seeing a correlation between effective use of the cloud and business performance.


Minahan: What the cloud, mobile, and social bring to bear, in addition to new collaboration models, is that they kick off an unbelievable amount of new information, oftentimes not in a structured way. There's a need to aggregate that information and analyze it in new ways to detect and predict -- propensity modeling on your customers, your supply chain, and your employees' progression and development. That is extremely powerful.

I think we've just scratched the surface. As an industry, we've provided the channels through which to collaborate, as we heard today. There are entirely new engagement models and business models that companies hadn't even thought of before. Once you have that information, that connectivity, and that collaboration, you can begin to investigate and experiment through trial and error.

To answer your question about measurement on this, yes, we need measurement of the business process and the business outcome. Let’s not forget why companies adopt technology. It’s not just for technology sake. It’s to effect the change. It’s to effect more efficiency, greater productivity, and new engagement capabilities.

Measuring the business benefit is what we're seeing and what we're advising our customers to do -- rather than just measuring whether we're tracking toward having more cloud in our infrastructure portfolios.

The focus today is largely driven by the fact that the lines of business are now more engaged in the buying decision and in shaping what they want from a technology standpoint to help them enable their business process. So the metrics have shifted from one of speeds and feeds and users to one of business outcomes.

Gardner: Bryan at TELUS in Toronto, you're closely associated with human resources productivity and the softer metrics of employee involvement and dedication, that sort of thing. Are there any ways you can think of in which cloud adoption and innovation, as we've been describing, have had unintended consequences when it comes to employee empowerment or that innovation equation? How do you view measuring the success of cloud adoption?

Simplifying the process

Acker: We measure our customers' success by the likelihood to recommend. Will a TELUS customer recommend our services and products to friends, family, and peers?

We measure internal success by our employee engagement metric. If the customers are satisfied and the employees are engaged and fulfilled at work, that means that we're probably moving in the right direction. We can kind of reverse engineer to see what changes are helping us. That allows us to take our information and innovation from the cloud and inspire better behaviors and better process.

We can say, "You know what, in this pocket we’ve analyzed that our customers are likely to recommend it higher than anywhere else in Canada. What are they doing?" We can look back through the information shared on the cloud and see the great customer success stories or the great team building that’s driving engagement through the roof.

We can say, "This is the process we have to replicate and spread throughout all of our centers." Then, we can tweak it for cultural specifics. But because of that, we can use the cloud to inspire better behavior, not just say that we had 40,000 users and 2,000 hits on this blog post. We're really trying to get away from the quantitative and get into the qualitative to drive change throughout the organization.

Gardner: What comes next? Where do you see the impacts of cloud adoption in your business over the next couple of years?

Steinbach: There are still some challenges in front of us. One of the challenges is China. China is one of the biggest markets, but cloud services are not always available or they're very slow. If your cloud solution is hosted outside of China, there's a big problem. These are probably technical challenges, but we have to find solutions with our partners there, so that they can establish their services in China.

That's one of the challenges. The other is that the cloud might change the role of IT in our organization. In the past, we owned the systems and the applications. Today, the business can basically buy cloud services with a credit card. So you could imagine that they won't need us anymore in the future, but that's not true.

As an IT organization, we probably have to redefine our role inside the organization, moving from just providing solutions or hardware to being an ambassador for the business and helping them make the right decisions. Some problems will remain, such as the integration between different applications. It doesn't get easier in the cloud, so that's where I see the challenge.

And last but not least, it's about security. We take that really seriously. If we store data, whether it's our employees' or our consumers', we have to make sure that our cloud providers have the same standards of security and that there are no leaks. That's very, very important for us. And there are legal aspects as well.

We've just started. There are still a lot of things to do in the next few years, but we're definitely going on with our strategy towards the cloud and toward mobile. And, at the end of the day, it all fits together. I think it was said before that it's not only cloud, but it's the big data, collaboration, and mobile. You have to see the whole thing as one package of opportunities.

Important challenges

Gardner: What do you think might be some of the impacts a few years from now that we're only just starting to realize?

Acker: On a more positive note, which is just the other side of the coin, obviously the challenges are there, but we're actually just starting to be able to experience the fact that innovation at TELUS is moving faster than it used to. We're no longer dependent on the speed at which our pre-assigned resources can make change and develop new products.

IT can now look at it from a more strategic point of view, which is great. Now, we're maximizing quarterly releases from systems that are leveraging the input from multiple companies around the world, not just how fast our learning team can develop something or how fast our IT team can build new functionality into our products.

We're no longer limited by the resources, and innovation is flying forward. That, for us, is the biggest unexpected gain. We're seeing all this technology that used to take months or years to change now on a quarterly release schedule. This is fantastic. Even within a year of being on our cloud-computing system, we're so happy, and that is inspiring to people. They're maximizing that and trying to push the organization forward as well. So, that’s a real big benefit.

Gardner: Tim, do you have any thoughts about where this can lead us in the next few years that we haven't yet hit upon, things you're just starting to see the first glimmers of?


Minahan: A lot of it has been touched on here. We're seeing a massive shift in what the role of IT is, moving from one of deploying technology and integrating things to really becoming business process experts.

We talked a bit about the amount of data and the insights that are now available to help you better understand and predict the appetites of your customers, and even to determine when your machines might fail and when it's time to reorder or schedule a service repair.

I think the biggest thing is that the cloud is going to unlock new business models and new organization models. We talked a bit about TELUS and their work patterns, in which most of the workers are remote and how they are engaging the field service technicians in the field.

We talked about the growing contingent workforce and how the cloud is enabling folks to collaborate with, onboard, and skill up those non-payroll employees much more quickly. We're going to see new virtual enterprises. We're talking about borderless enterprises that allow you to organize not just pools of talent, but entire value chains, and be able to collaborate in a much more transparent way.

We mentioned Apple Home before. You're beginning to see it with 3D printers. It's this whole idea that more and more companies become digital businesses. This isn't just about omni-channel commerce providing a single customer experience across multiple channels.

It's actually about moving more and more of what you deliver -- the solutions and the products you formerly delivered physically -- to digital bits that can be tested, experienced, and downloaded online.

All of this is being empowered by this massive convergence of cloud, mobility, social and business networks, and big data. 

What comes next

Cone: To follow on what Tim said about the borderless enterprise, when we asked people what's in the cloud now and what's going to be substantially cloud-based in three years, three of the highest-growth areas were innovation in R&D, supply chain, and HR. All of those go straight to this idea that boundaryless digital enterprises are emerging and that cloud will be the underpinning of these enterprises.

We're working with Tim right now on a big global study about the workforce. When I talk about culture and the way companies function internally, a year ago, when we started this research, HR was the least likely function of the ones we queried to be in the cloud, and it's going to have massive growth in the next couple of years.


These stories start to converge of boundaryless and culture, all coming together via the cloud. That’s the segue to say that we're really excited to see how these numbers look when we refield this survey this summer, because that progress is snowballing and accelerating beyond even what people thought it was the last time we asked them.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP.


Tags:  BriefingsDirect  Bryan Acker  cloud computing  Dana Gardner  Ed Cone  Interarbor Solutions  Oxford Economics  Ralf Steinbach  SAP  Tim Minahan 


How UK data solutions developer Systems Mechanics uses HP Vertica for BI, streaming and data analysis

Posted By Dana L Gardner, Wednesday, July 23, 2014

Three years ago, Systems Mechanics Limited used relational databases to assemble and analyze some 20 different data sources in near real-time. But most relational database appliances used 1980s technical approaches, and the ability to connect more data and manage more events capped out. The runway for their business expansion simply ended.

So Systems Mechanics looked for a platform that scales well and provides real-time data analysis, too. At the volumes and price they needed, HP Vertica has since scaled without limit ... an endless runway.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how Systems Mechanics improved how their products best deliver business intelligence (BI), analytics streaming, and data analysis, BriefingsDirect spoke with Andy Stubley, Vice President of Sales and Marketing at Systems Mechanics, based in London. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner:  You've been doing a lot with data analysis at Systems Mechanics, and monetizing that in some very compelling ways.

Stubley: Yes, indeed. Systems Mechanics is principally a consultancy and a software developer. We've been working in the telco space for the last 10-15 years. We also have a history in retail and financial services.

Stubley

The focus we've had recently, and the products we've developed into our Zen family, are based on big data, particularly in telcos, as they evolve from principally carrying analog conversations to a world of devices where people have smartphone applications -- and data becomes ever more important.

All that data and all those people connected to the network cause a lot more events that need to be managed, and that data is both a cost to the business and an opportunity to optimize the business. So we have a cost reduction we apply and a revenue upside we apply as well.

Quick example

Gardner: What’s a typical way telcos use Zen, and that analysis?

Stubley: Let’s take a scenario where you’re looking in network and you can’t make a phone call. Two major systems are catching that information. One is a fault-management system that’s telling you there is a fault on the network and it reports that back to the telecom itself.

The second one is the performance management system. That doesn't specify faults as such, but it tells you if you're having things like thresholds being affected, which may have an impact on performance over time. Either of those can have an impact on your customer, and from a customer's perspective, you might also be having a problem with the network that isn't reported by either of those systems.

We're finding that social media is getting a bigger play in this space. Why is that? Particularly among younger users of consumer-based telcos -- mobile telcos especially -- if they can't get a signal or can't make a phone call, they get onto social media and they trash the brand.

They’re making noise. A trend is combining fault management and performance management, which are logical partners with social media. All of a sudden, rather than having a couple of systems, you have three.

In our world, we can put 25 or 30 different data sources onto a single Zen platform. In fact, there is no theoretical limit to the number we could handle, but 20 to 30 is quite typical now. That enables us to manage all the different network elements and different types of mobile technologies -- LTE, 3G, and 2G. It could be Ericsson, Nokia, Huawei, ZTE, or Alcatel-Lucent. There is an amazing range of equipment, all currently managed through separate entities. We're offering a platform to pull it all together in one unit.

The other way I tend to look at it is that we're trying to get the telco to work the way you might view a human. Humans are the best decision-making platforms in the world, and we could probably still claim that. As humans, we have conscious and unconscious processes running. We don't think about breathing or pumping blood around our system, but it's happening all the time.


We have senses that are pulling in a massive amount of information from the outside world. You're listening to me now. You're probably doing a bunch of other things as well, perhaps tapping away on a table. Your senses are gathering information as you see, hear, feel, touch, and taste.

Those all carry information coming into the body, but most of the activity is subconscious. In the world of big data, this is the Zen goal, and what we're delivering in a number of places is to make as many actions as possible in a telco environment -- in a network environment -- reach that automatic, subconscious state.

Suppose I have a problem on a network. I relate it back to the people who need to know, but I don't require human intervention. We're looking at a position where human intervention is reserved for looking at patterns in that information and deciding what can be done intellectually to make the business better.

That probably speaks to another point here. We use a solution with visualization, because in the world of big data, you can’t understand data in numbers. Your human brain isn’t capable of processing enough, but it is capable of identifying patterns of pictures, and that’s where we go with our visualization technology.

Gather and use data

We have a customer that is one of the largest telcos in EMEA. They're basically taking in 90,000 alarms from the network a day -- across their subsidiary companies, all into one environment. But 90,000 alarms needing manual intervention is a very big number.

Using the Zen technology, we’ve been able to reduce that to 10,000 alarms. We’ve effectively taken 90 percent of the manual processing out of that environment. Now, 10,000 is still a lot of alarms to deal with, but it’s a lot less frightening than 90,000, and that’s a real impact in human terms.

Gardner: Now that we understand what you do, let’s get into how you do it. What’s beneath the covers in your Zen system that allows you to confidently say you can take any volume of data you want?


Stubley: Fundamentally, that comes down to the architecture we built for Zen. The first element is our data-integration layer. We have a technology that we developed over the last 10 years specifically to capture data in telco networks. It’s real-time and rugged and it can deal with any volume. That enables us to take anything from the network and push it into our real-time database, which is HP’s Vertica solution, part of the HP HAVEn family.

Vertica allows us to basically record any amount of data in real time and scale automatically on the HP hardware platform we also use. If we need more processing power, we can add more servers to scale transparently. That enables us to take in any amount of data, which we can then process.

We have two processing layers. Referring to our earlier discussion about conscious and subconscious activity, our conscious activity is visualizing that data, and that’s done with Tableau.

We have a number of Tableau reports and dashboards with each of our product solutions. That enables us to envision what’s happening and allows the organization, the guys running the network, and the guys looking at different elements in the data to make their own decisions and identify what they might do.

We also have a streaming analytics engine that listens to the data as it comes into the system before it goes to Vertica. If we spot the patterns we’ve identified earlier “subconsciously,” we’ll then act on that data, which may be reducing an alarm count. It may be "actioning" something.

It may be sending someone an email. It may be creating a trouble ticket on a different system. Those all happen transparently and automatically. It’s four layers simplifying the solution: data capture, data integration, visualization, and automatic analytics.
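To make those four layers a little more concrete, here is a minimal sketch of how a streaming rule engine of this kind could sit in front of the database, acting on known patterns before the data lands in storage. It is only an illustration in Python under assumed names -- not Systems Mechanics' actual Zen code -- and the event fields, rules, and actions are invented for the example.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]   # a pattern identified ahead of time
    action: Callable[[dict], None]    # e.g. suppress, send an email, open a ticket

@dataclass
class StreamProcessor:
    rules: List[Rule] = field(default_factory=list)
    to_store: List[dict] = field(default_factory=list)   # stand-in for the Vertica load queue

    def ingest(self, event: dict) -> None:
        for rule in self.rules:
            if rule.matches(event):
                rule.action(event)            # automatic, "subconscious" handling
                if event.get("suppressed"):
                    return                    # a suppressed duplicate never reaches an operator
        self.to_store.append(event)           # everything else flows on to Vertica and Tableau

# Example rule: collapse repeat alarms from the same network element.
seen = set()
def is_repeat_alarm(event: dict) -> bool:
    key = (event["element"], event["alarm_code"])
    if key in seen:
        event["suppressed"] = True
        return True
    seen.add(key)
    return False

processor = StreamProcessor(rules=[Rule("dedupe", is_repeat_alarm, lambda e: None)])
processor.ingest({"element": "cell-042", "alarm_code": "LINK_DOWN"})   # stored
processor.ingest({"element": "cell-042", "alarm_code": "LINK_DOWN"})   # matched and suppressed

The point of a rule like the deduplication above is that a suppressed alarm never reaches an operator at all, which is how a 90,000-alarm day can shrink to roughly 10,000 without anyone manually triaging the remainder.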

Developing high value

Gardner: And when you have the confidence to scale your underlying architecture and infrastructure, and you're able to visualize and develop high value for a vertical industry like telco, that allows you to expand into more lines of business in terms of products and services, and also into more verticals. Where have you taken this in terms of the Zen family, and where do you take it now in terms of your market opportunity?

Stubley: We focus on mobile telcos. That's our heritage. We can take any data source from a telco, but we can actually take any data source from anywhere, on any platform, from any company. That ranges from binary to HTML. You name it -- if you've got data, we can load it.

That means we can build our processing accordingly. What we do is position what we call solution packs. A solution pack is a connector to the outside world, to the network, and it grabs the data. We've got an element of data modeling there, so we can load the data into Vertica. Then, we have already-built reports in Tableau that allow us to interrogate the data automatically. That's at a component level.

Once you have a number of components, we can then look horizontally across those different items and at the behaviors where they interact with each other. In pure telco terms, we would be looking at different network devices and the end-to-end performance of the network, but the same would apply to a fraud scenario or to someone who is running cable TV.
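As a rough illustration of the solution-pack idea -- invented names in Python, not the actual Zen product code -- each pack can be thought of as a small bundle of a connector, a target table for the data model, and the prebuilt reports that already know how to interrogate it:

from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class SolutionPack:
    source: str                               # e.g. "Ericsson fault management"
    connector: Callable[[], Iterable[dict]]   # grabs raw events from that source
    table: str                                # Vertica table the data model maps to
    reports: List[str]                        # names of the prebuilt Tableau workbooks

    def load(self, write_row: Callable[[str, dict], None]) -> None:
        # Pull from the network source and push each modeled row toward Vertica.
        for event in self.connector():
            write_row(self.table, event)

def refresh(packs: List[SolutionPack], write_row: Callable[[str, dict], None]) -> None:
    # Adding a new data source means adding one more pack, not reworking the platform.
    for pack in packs:
        pack.load(write_row)

Looking horizontally across components then becomes a matter of querying across the tables the packs have filled, rather than wiring each new source into the platform by hand.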

The very highest level is finding what problem you’re going to solve and then using the data to solve it.

So multi-play players are interesting, because they want to monitor what's happening with TV as well, and that fits into exactly the same category. Realistically, anybody with high-volume, real-time data can benefit from Vertica.

Another interesting play in this scenario is social gaming and online advertising. They all have similar data characteristics, very high volume and fixed data that needs to be analyzed and processed automatically.

Why Vertica?

Gardner: How long have you been using Vertica, and what is it that drove you to using it vis-à-vis alternatives?

Stubley: As far as the Zen family goes, we have used other technologies in the past, other relational databases, but we've used Vertica now for more than two-and-a-half years. We were looking for a platform that could scale and would give us real-time data. At the volumes we were looking at, nothing could compete with Vertica at a sensible price. You can build yourself almost any solution with enough money, but we haven't got too many customers who are prepared to make that investment.

So Vertica fits in with the technology of the 21st century. A lot of the relational database appliances are using 1980s thought processes. What's happened with processing in the last few years is that nobody shares memory anymore, and our environment requires a non-shared-memory solution. Vertica has been built on that basis. It scales without limit.

One of the areas we’re looking at that I mentioned earlier was social media. Social media is a very natural play for Hadoop, and Hadoop is clearly a very cost-effective platform for vast volumes of data at real-time data load, but very slow to analyze.

So the combination of a high-volume, low-cost platform for the bulk of the data and a very high-performing real-time analytics engine is very compelling. The challenge is going to be moving the data between the two environments. That isn't going to go away. It's not simple, and there are a number of approaches. HP Vertica is taking some.

There is Flex Zone, and there are any number of other players in that space. The reality is that you probably reach an environment where people are loading Hadoop and Vertica in parallel. That's what we plan to do. It gives you much more resilience. For a lot of the data we're putting into our system, we're actually planning to put the raw data files into Hadoop, so we can reload them as necessary to improve the resilience of the overall system, too.
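A minimal sketch of that parallel-loading idea, assuming the standard hdfs and vsql command-line tools and invented host, path, and table names (an illustration of the approach, not the Zen implementation):

import subprocess

def load_parallel(local_file: str, hdfs_dir: str, table: str) -> None:
    # 1. Archive the raw file in HDFS so it can be replayed into Vertica later
    #    if the analytics copy ever needs to be rebuilt.
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_file, hdfs_dir], check=True)

    # 2. Bulk-load the same rows into Vertica with a COPY statement via vsql.
    copy_sql = "COPY {0} FROM LOCAL '{1}' DELIMITER ',' DIRECT;".format(table, local_file)
    subprocess.run(["vsql", "-h", "vertica.example", "-U", "dbadmin", "-c", copy_sql],
                   check=True)

# Example: load_parallel("alarms_20140723.csv", "/raw/alarms/", "alarms")

Because the raw files stay in Hadoop, a bad load or a schema change on the Vertica side can be recovered by simply replaying the archived files.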

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  Andy Stubley  big data  BriefingsDirect  Dana Gardner  data analysis  data analytics  HAVEn  HP  HP Vertica  HPDiscover  Interarbor Solutions  System Mechanics  telco 


Health data deluge requires secure information flow via standards, says The Open Group’s new healthcare director

Posted By Dana L Gardner, Tuesday, July 15, 2014

An expected deluge of data and information about patients, providers, outcomes, and needed efficiencies is pushing the healthcare industry to rapid change. But more than dealing with just the volume of data is required. Interoperability, security and the ability to adapt rapidly to the lessons in the data are all essential.

The means of enabling Boundaryless Information Flow, Open Platform 3.0 adaptation, and security for the healthcare industry are then, not surprisingly, headline topics for The Open Group’s upcoming event, Enabling Boundaryless Information Flow on July 21 and 22 in Boston.

And Boston is a hotbed of innovation and adaption for how technology, enterprise architecture, and open standards can improve the communication and collaboration among healthcare ecosystem players.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

In preparation for the conference, BriefingsDirect had the opportunity to interview Jason Lee, the new Healthcare and Security Forums Director at The Open Group. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: I'm looking forward to the Boston conference next week and want to remind our listeners and readers that it's not too late to sign up to attend. You can learn more at www.opengroup.org.

Let’s start by talking about the relationship between Boundaryless Information Flow, which is a major theme of the conference, and healthcare. Healthcare perhaps is the killer application for Boundaryless Information Flow.

Lee: Interesting, I haven’t heard it referred to that way, but healthcare is 17 percent of the US economy. It's upwards of $3 trillion. The costs of healthcare are a problem, not just in the United States, but all over the world, and there are a great number of inefficiencies in the way we practice healthcare.

Lee

We don't necessarily intend to be inefficient, but there are so many places and people involved in healthcare that it's very difficult to get them to speak the same language. It's almost as if you're in a large house with lots of different rooms, and in every room you walk into they speak a different language. Getting information to flow from one room to the other requires some active effort, and that's what we're undertaking here at The Open Group.

Gardner: What is it about the current collaboration approaches that don’t work? Obviously, healthcare has been around for a long time and there have been different players involved. What are the hurdles? What prevents a nice, seamless, easy flow and collaboration in information that creates better outcomes? What’s the holdup?

Many barriers

Lee: There are many ways to answer that question, because there are many barriers. Perhaps the simplest is the transformation of healthcare from a paper-based industry to a digital industry. Everyone has walked into a medical office, looked behind the people at the front desk, and seen file upon file and row upon row of folders, information that’s kept in a written format.

When there's been movement toward digitizing that information, not everyone has used the same system. It's almost like trains running on different gauge track. Obviously if the track going east to west is a different gauge than going north to south, then trains aren’t going to be able to travel on those same tracks. In the same way, healthcare information does not flow easily from one office to another or from one provider to another.

Gardner: So not only do we have disparate strategies for collecting and communicating health data, but we're also seeing much larger amounts of data coming from a variety of new and different places. Some of them now even involve sensors inside of patients themselves or devices that people will wear. So is the data deluge, the volume, also an issue here?

Lee: Certainly. I heard recently that an integrated health plan, which has multiple hospitals involved, contains more elements of data than the Library of Congress. As information is collected at multiple points in time, over a relatively short period of time, you really do have a data deluge. Figuring out how to find your way through all the data and look at the most relevant [information] for the patient is a great challenge.

Gardner: I suppose the bad news is that there is this deluge of data, but it’s also good news, because more data means more opportunity for analysis, a better ability to predict and determine best practices, and also provide overall lower costs with better patient care.


So it seems like the stakes are rather high here to get this right, to not just crumble under a volume or an avalanche of data, but to master it, because it's perhaps the future. The solution is somewhere in there, too.

Lee: No question about it. At The Open Group, our focus is on solutions. We, like others, put a great deal of effort into describing the problems, but our focus is on figuring out how to bring IT technologies to bear on business problems, how to encourage different parts of organizations -- and different organizations -- to speak to one another in the same language, and how to operate using common standards. That's really what we're all about.

And it is, in a large sense, part of the process of helping to bring healthcare into the 21st century. A number of industries are a couple of decades ahead of healthcare in the way they use large datasets -- what some people refer to as big data. I'm talking about companies like big department stores and large online retailers. They really have stepped up to the plate and are using that deluge of data in ways that are very beneficial to them -- and healthcare can do the same. We're just not quite at the same level of evolution.

Gardner: And to your point, the stakes are so much higher. Retail is, of course, a big deal in the economy, but as you pointed out, healthcare is an even larger segment. So just making modest improvements in communication, collaboration, or data analysis can reap huge rewards.

Quality side

Lee: Absolutely true. There is the cost side of things, but there is also the quality side. So there are many ways in which healthcare can improve through standardization and coordinated development, using modern technology that cannot just reduce cost, but improve quality at the same time.

Gardner: I'd like to get into a few of the hotter trends. But before we do, it seems that The Open Group has recognized the importance here by devoting the entire second day of its conference in Boston, on July 22, to healthcare.

Maybe you could provide us a brief overview of what participants, and even those who follow online or view recorded sessions of the conference at http://new.livestream.com/opengroup, should expect. What's going to happen on July 22?

Lee: We have a packed day. We're very excited to have Dr. Joe Kvedar, a physician at Partners HealthCare and Founding Director of the Center for Connected Health, as our first plenary speaker. The title of his presentation is “Making Health Additive.”

Dr. Kvedar is a widely respected expert on mobile health, which is currently the Healthcare Forum’s top work priority.  As mobile medical devices become ever more available and diversified, they will enable consumers to know more about their own health and wellness. 

A great deal of potentially useful health data will be generated. How this information can be used -- not just by consumers but also by the healthcare establishment that takes care of them as patients -- will become a question of increasing importance. It will become an area where standards development and The Open Group can be very helpful.

Our second plenary speaker, Proteus Duxbury, Chief Technology Officer at Connect for Health Colorado, will discuss a major feature of the Affordable Care Act -- the health insurance exchanges -- which are designed to bring health insurance to tens of millions of people who previously did not have access to it.

He is going to talk about how enterprise architecture -- which is really about getting to solutions by helping the IT folks talk to the business folks and vice versa -- has helped the State of Colorado develop their health insurance exchange.

After the plenaries, we will break up into three tracks, one of which is healthcare-focused. In this track there will be three presentations, all of which discuss how enterprise architecture and the approach to Boundaryless Information Flow can help healthcare and healthcare decision-makers become more effective and efficient.

Care delivery

One presentation will focus on the transformation of care delivery at the Visiting Nurse Service of New York. Another will address stewarding healthcare transformation using enterprise architecture, focusing on one of our platinum members, Oracle, and a company called Intelligent Medical Objects, and how they're working together in a productive way, bringing IT and healthcare decision-making together.

Then, the final presentation in this track will focus on the development of an enterprise architecture-based solution at an insurance company. The payers, or the insurers -- the big companies that are responsible for paying bills and collecting premiums -- have a very important role in the healthcare system that extends beyond administration of benefits. Yet, payers are not always recognized for their key responsibilities and capabilities in the area of clinical improvements and cost improvements.

With the increase in payer data brought on in large part by the adoption of a new coding system -- the ICD-10 -- which will come online this year, there will be a huge amount of additional data, including clinical data, that becomes available. At The Open Group, we consider payers -- health insurance companies (some of which are integrated with providers) -- as very important stakeholders in the big picture.

In the afternoon, we're going to switch gears a bit and have a speaker talk about the challenges, the barriers, the "pain points" in introducing new technology into healthcare systems. The focus will return to remote or mobile medical devices and the predictable but challenging barriers to getting newly generated health information to flow to doctors' offices and into patients' records, electronic health records, and hospitals' data-keeping and data-sharing systems.

We'll have a panel of experts that responds to these pain points, these challenges, and then we'll draw heavily from the audience, who we believe will be very, very helpful, because they bring a great deal of expertise in guiding us in our work. So we're very much looking forward to the afternoon as well.

Gardner: I'd also like to remind our readers and listeners that they can take part in this by attending the conference, and there is information about that at the opengroup.org website.

It's really interesting. A couple of these different plenaries and discussions in the afternoon come back to this user-generated data. Jason, we really seem to be on the cusp of a whole new level of information that people will be able to develop from themselves through their lifestyle, new devices that are connected.

We hear from folks like Apple, Samsung, Google, and Microsoft. They're all pulling together information and making it easier for people to not only monitor their exercise, but their diet, and maybe even start to use sensors to keep track of blood sugar levels, for example.

In fact, a new Flurry Analytics survey showed a 62 percent increase in the use of health and fitness applications over the last six months on popular mobile devices. This compares to a 33 percent increase in other applications in general. So there's an 87 percent faster uptick in the use of health and fitness applications.

Tell me a little bit how you see this factoring in. Is this a mixed blessing? Will so much data generated from people in addition to the electronic medical records, for example, be a bad thing? Is this going to be a garbage in, garbage out, or is this something that could potentially be a game changer in terms of how people react to their own data -- and then bring more data into the interactions they have with healthcare providers?

Challenge to predict

Lee: It's always a challenge to predict what the market is going to do, but I think that's a remarkable statistic that you cited. My prediction is that the increased volume of person-generated data from mobile health devices is going to be a game changer. This view also reflects how the Healthcare Forum's members (which include Capgemini, Philips, IBM, Oracle, and HP) view the future.

The commercial demand for mobile medical devices -- things that can be worn, embedded, or swallowed, as in pills, as you mentioned -- is growing ever larger. The software and applications developed for use with these devices are going to grow by leaps and bounds.

As you say, there are big players getting involved. Already some of the pedometer-type devices that measure the number of steps taken in a day have captured the interest of many, many people. Even David Sedaris, serious guy that he is, was writing about it recently in The New Yorker.

What we will find is that many of the health indicators that we used to have to go to the doctor or nurse or lab to get information on will become available to us through these remote devices.

There will be a question, of course, as to the reliability and validity of the information, to your point about garbage in, garbage out, but I think standards development will help here. This, again, is where The Open Group comes in. We might also see the FDA exercising its role in ensuring safety here, as well as other organizations, in determining which devices are reliable.

The Open Group is working in the area of mobile data and information systems that are developed around them, and their ability to (a) talk to one another, and (b) talk to the data devices/infrastructure used in doctors’ offices and in hospitals. This is called interoperability and it's certainly lacking in the country.

There are already problems around interoperability and connectivity of information in the healthcare establishment as it is now. When patients and consumers start collecting their own data, and the patient is put at the center of the nexus of healthcare, then the question becomes how does that information that patients collect get back to the doctor/clinician in ways in which the data can be trusted and where the data are helpful?

After all, if a patient is wearing a medical device, there is the opportunity to collect data, about blood-sugar level let's say, throughout the day. And this is really taking healthcare outside of the four walls of the clinic and bringing information to bear that can be very, very useful to clinicians and beneficial to patients.

In short, the rapid market dynamic in mobile medical devices, and in the software and hardware that facilitate interoperability, begs for standards-based solutions that reduce costs, improve quality, and put the patient at the center. This is The Open Group's Healthcare Forum's sweet spot.
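
[As a concrete illustration of the kind of standardized, shareable record this discussion points toward, here is a minimal sketch in Python of how a single blood-glucose reading from a wearable device might be represented, loosely modeled on the shape of an HL7 FHIR observation. The field names, codes, and identifiers are illustrative assumptions, not a definitive rendering of any standard.]

    # A device-generated glucose reading expressed as a structured record that any
    # system speaking the same standard could parse. The coding and patient
    # reference below are illustrative placeholders.
    glucose_reading = {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "2339-0",                 # illustrative code for blood glucose
                "display": "Glucose [Mass/volume] in Blood",
            }]
        },
        "subject": {"reference": "Patient/example-123"},   # hypothetical patient ID
        "effectiveDateTime": "2014-07-22T08:30:00Z",
        "valueQuantity": {"value": 105, "unit": "mg/dL"},
        "device": {"display": "wearable glucose monitor"},
    }

    # Because the structure is shared, a clinician's system can read the value
    # without knowing anything about the vendor of the device that produced it.
    print(glucose_reading["valueQuantity"]["value"], glucose_reading["valueQuantity"]["unit"])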

Game changer

Gardner: It seems to me a real potential game changer as well, and one in which something like Boundaryless Information Flow and standards will play an essential role. Because one of the big question marks with many of the ailments in a modern society has to do with lifestyle and behavior.

So often, the providers of the care only really have the patient’s responses to questions, but imagine having a trove of data at their disposal, a 360-degree view of the patient to then further the cause of understanding what's really going on, on a day-to-day basis.

But then, it's also about having a two-way street: being able to deliver, perhaps in an automated fashion, reinforcements, incentives, and information back to the patient in real time about behavior and lifestyle. So it strikes me as something quite promising, and I look forward to hearing more about it at the Boston conference.

Any other thoughts on this issue about patient flow of data, not just among and between providers and payers, for example, or providers in an ecosystem of care, but with the patient as the center of it all, as you said?

Lee: As more mobile medical devices come to the market, we'll find that consumers own multiple types of devices, at least some of which collect multiple types of data. So even with the patient at the center of their own healthcare information collection, there can be barriers to having one device talk to another. If a patient wants to keep their own personal health record, there may be difficulties in bringing all that information into one place.

So the interoperability issue, the need for standards, guidelines, and voluntary consensus among stakeholders about how information is represented becomes an issue, not just between patients and their providers, but for individual consumers as well.

Gardner: And also the cloud providers. There will be a variety of large organizations with cloud-modeled services, and they are going to need to be, in some fashion, brought together, so that a complete 360-degree view of the patient is available when needed. It's going to be an interesting time.

Of course, we've also looked at many other industries and tried to have a cloud synergy, a cloud-of-clouds approach to data and also to transactions. So it's interesting how what's going on in multiple industries is common, but it strikes me that, again, the scale and the impact of the healthcare industry makes it a leader now, and perhaps a driver for some of these long-overdue structure and standardization activities.

Lee: It could become a leader. There is no question about it. Moreover, there is a lot healthcare can learn from other companies, from mistakes that other companies have made, from lessons they have learned, from best practices they have developed (both on the content and process side). And there are issues, around security in particular, where healthcare will be at the leading edge in trying to figure out how much is enough, how much is too much, and what kinds of solutions work.

There's a great future ahead here. It's not going to be without bumps in the road, but organizations like The Open Group are designed and experienced to help multiple stakeholders come together and have the conversations that they need to have in order to push forward and solve some of these problems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

Tags:  BriefingsDirect  Dana Gardner  enterprise architecture  healthcare  Interarbor Solutions  Jason Lee  The Open Group  The Open Group Conference 


HP network management heightens performance while reducing total costs for Nordic telco TDC

Posted By Dana L Gardner, Monday, July 14, 2014

When Nordic communications services provider TDC needed network infrastructure improvements across its disparate networks in several Nordic countries, it needed both simplicity in execution and agility in performance.

Our next innovation case study interview therefore highlights how TDC in Stockholm found ways to better determine the root causes of any network disruption, and to conduct deep inspection of the traffic to best manage their service-level agreements (SLAs).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect had an opportunity to learn first-hand how over 50,000 devices can be monitored and managed across a state-of-the-art network when we interviewed Lars Niklasson, the Senior Consultant at TDC. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: You have a number of main businesses in your organization. There’s TDC Solutions and mobile. There’s even television and some other hosting. Explain for us how large your organization is.

Niklasson: TDC is an operator in the Nordic region, where we have a network covering Norway, Sweden, Finland, and Denmark. In Sweden, we're also an integrator and have quite a big consulting role. In Sweden we're around 800 people, and the whole TDC group is almost 10,000 people.

Gardner: So it’s obviously a very significant network to support this business and deliver the telecommunication services. Maybe you could define your network for us.

Niklasson: It's quite big, over 50,000 devices, and everything is monitored of course. It’s a state-of-the-art network.

Gardner: When you have so many devices to track, so many types of layers of activity and levels of network operations, how do you approach keeping track of that and making sure that you’re not only performing well, but performing efficiently?

Niklasson: Many years ago, we implemented HP Network Node Manager (NNM) and we have several network operating centers in all countries using NNM. When HP released different smart plug-ins, we started to implement those too for the different areas that they support, such as quality assurance, traffic, and so on.

Gardner: So you’ve been using HP for your network management and HP Network Management Center for some time, and it has of course evolved over the years. What are some of the chief attributes that you like or requirements that you have for network operations, and why has the HP product been so strong for you?

Quick and easy

Niklasson: One thing is that it has to be quick and easy to manage. We have lots of changes all the time, especially in Sweden, when a new customer comes on. And in Sweden, we're monitoring end customers' networks.

It's also very important to be able to integrate it with the other systems that we have. So we can, for example, tell which service-level agreement (SLA) a particular device has and things like that. NNM makes this quite efficient.

Gardner: One of the things that I’ve heard people struggle with is the amount of data that’s generated from networks that then they need to be able to sift through and discover anomalies. Is there something about visualization or other ways of digesting so much data that appeals to you?

Niklasson: NNM is quite good at finding the root cause. You don’t get very many incidents when something happens. If I look back at other products and older versions, there were lots and lots of incidents and alarms. Now, I find it quite easy to manage and configure NNM so it's monitoring the correct things and listening to the correct traps and so on.
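
[For readers curious what "finding the root cause" means in practice, here is a minimal sketch in Python of the general idea: when an upstream device goes down, alarms from devices reachable only through it are treated as symptoms of the same incident rather than as separate incidents. This is a generic illustration of event correlation, with made-up device names, not how NNM actually implements it.]

    # Map each device to the upstream device it is reached through (core devices are absent).
    topology = {
        "switch-a": "router-1",
        "switch-b": "router-1",
        "server-x": "switch-a",
    }

    # Raw "node down" events as they arrive from polling or traps.
    down = {"router-1", "switch-a", "switch-b", "server-x"}

    def root_causes(down_nodes, topology):
        # Keep only nodes whose upstream parent is still reachable; everything
        # else is a downstream symptom of the same failure.
        return {node for node in down_nodes if topology.get(node) not in down_nodes}

    print(root_causes(down, topology))   # {'router-1'}: one incident instead of four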

Gardner: TDC uses network management capabilities and also sells them, providing them along with its telecom services. How has this worked in the field? Do any of your customers also manage their own networks, and how has this been for your consumers of network services?

Niklasson: We’re also an HP partner in selling NNM to end customers. Part of my work is helping customers implement this in their own environment. Sometimes a customer doesn’t want to do that. They buy the service from us, and we monitor the network. It’s for different reasons. One could be security, and they don’t allow us to access the network remotely. They prefer to have it in-house, and I help them with these projects.

Gardner: Lars, looking to the future, are there any particular types of technology improvements that you would like to see or have you heard about some of the roadmaps that HP has for the whole Network Management Center Suite? What interests you in terms of what's next?

Niklasson: I would say two things. One is application visibility in the network. We can get some of that with the traffic capabilities today, but it's still NetFlow-based, so I'm interested in seeing deeper inspection of the traffic. The other is more visibility into the virtual environments that we have.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Tags:  BriefingsDirect  Dana Gardner  HP  HPDiscover  Interarbor Solutions  Lars Niklasson  Network Management  Network node management  TDC 


Panel tackles how to make mobile devices as secure as they are indispensable

Posted By Dana L Gardner, Wednesday, July 09, 2014

As smartphones have become de rigueur in the global digital economy, users want them to do more work, and businesses want them to be more productive for their employees -- as well as powerful added channels to consumers.

But neither businesses nor mobile-service providers have a cross-domain architecture that supports all the new requirements for a secure digital economy, one that allows safe commerce, data sharing and user privacy.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

So how do we blaze a better path to a secure mobile future? How do we make today’s ubiquitous mobile devices as low risk as they are indispensable?

BriefingsDirect recently posed these and other questions to a panel of experts on mobile security: Paul Madsen, Principal Technical Architect in the Office of the CTO at Ping Identity; Michael Barrett, President of the FIDO (Fast Identity Online) Alliance; and Mark Diodati, a Technical Director in the Office of the CTO at Ping Identity. The sponsored panel discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We're approaching the Cloud Identity Summit 2014 (CIS) in Monterey, Calif., on July 19, and we still find that the digital economy is not really reaching its full potential. We're still dealing with ongoing challenges for trust, security, and governance across mobile devices and networks.

Even though people have been using mobile devices for decades—and in some markets around the world they're the primary tool for accessing the Internet—why are we still having problems? Why is this so difficult to solve?

Diodati: There are so many puzzle pieces to make the digital economy fully efficient. A couple of challenges come to mind. One is the distribution of identity. In prior years, the enterprise did a decent job -- not an amazing job, but a decent job -- of identifying users, authenticating them, and figuring out what they have access to.

Once you move out into a broader digital economy, you start talking about off-premises architectures and the expansion of user constituencies. There is a close relationship with your partners, employees, and your contractors. But relationships can be more distant, like with your customers.

Emerging threats

Additionally, there are issues with emerging security threats. In many cases, there are fraudsters with malware being very successful at taking people’s identities and stealing money from them.

Mobility can do a couple of things for us. In the old days, if you wanted more identity assurance to access important applications, you paid more in cost and usability problems. Specialized hardware was used to raise assurance. Now, the smartphone is really just a portable biometric device that users carry without us asking them to do so. We can raise assurance levels without the draconian increase in cost and usability problems.

We’re not out of the woods yet. One of the challenges is nailing down the basic administrative processes to bind user identities to mobile devices. That challenge is part cultural and part technology. [See more on a new vision for identity.]

Gardner: So it seems that we have a larger set of variables: the end users we authenticate are no longer captive on a network. As you mentioned, the mobile device, the smartphone, can be biometric and can be an even better authenticator than we've had in the past. We might actually be in a better position in a couple of years. Is there a transition now afoot from which we might actually come out better on the other end?

Madsen: The opportunities are clear. As Mark indicated, phones -- not just because of their technical features, but because of the relatively tight binding that users feel for them -- make a really strong authentication factor.

It's the old trope of something you have, something you know, and something you are. Phones are something you already have, from the user's point of view. It's not an additional hard token or USB token that we're asking employees to carry with them. It's something they want to carry, particularly if it's a BYOD phone.

So phones, because they're connected mobile computers, make a really strong second-factor authentication, and we're seeing that more and more. As I said, it’s one that users are happy using because of the relationship they already have with their phones, for all the other reasons. [See more on identity standards and APIs.]

Gardner: It certainly seems to make sense that you would authenticate into your work environment through your phone. You might authenticate in the airport to check in with your phone and you might use it for other sorts of commerce. It seems that we have the idea, but we need to get there somehow.

What’s architecturally missing for us to make this transition of the phone as the primary way in which people are identified session by session, place by place? Michael, any thoughts about that?

User experience

Barrett: There are a couple of things. One, in today’s world, we don’t yet have open standards that help to drive cross-platform authentication, and we don’t have the right architecture for that. In today’s world still, if you are using a phone with a virtual keyboard, you're forced to type this dreadful, unreadable tiny password on the keyboard, and by the way, you can’t actually read what you just typed. That’s a pretty miserable user experience, which we alluded to earlier.

But it's also very ugly. It's a mainframe-centric architecture. The notion that the authentication credentials are shared secrets that you know and that are stored on some central server is a very, very 1960s approach to the world. My own belief is that, in fact, we have to move towards a much more device-centric authentication model, where the remote server actually doesn't know your authentication credentials. Again, that comes back to both architecture and standards.

My own view is that if we put those in place, the world will change. Many of us remember the happy days of the late '80s and early '90s when offices were getting wired up, and we had client-server applications everywhere. Then, HTML and HTTP came along, and the world changed. We're looking at the same kind of change, driven by the right set of appropriately designed open standards.

Gardner: So standards, behavior, and technology make for an interesting adoption path, sometimes a chicken and the egg relationship. Tell me about FIDO and perhaps any thoughts about how we make this transition and adoption happen sooner rather than later?

Barrett: I gave a little hint. FIDO is an open-standards organization really aiming to develop a set of technical standards to enable device-centric authentication that is easier for end users to use. As an ex-CTO, I can tell you what happens when you try to give users stronger authenticators that are harder for them to use: they won't voluntarily use them.

We have to do better than we're doing today in terms of ease of use of authentication. We also have to come up with authentication that is stronger for the relying parties, because that’s the other face of this particular coin. In today’s world, passwords and pins work very badly for end users. They actually work brilliantly for the criminals. 

So I'm kind of old school on this. I tend to think that security controls should be there to make life better for relying parties and users and not for criminals. Unfortunately, in today’s world, they're kind of inverted.

So FIDO is simply an open-standards organization that is building and defining those classes of standards and, through our member companies, is promulgating deployment of those standards.

Madsen: I think FIDO is important. Beyond the fact that it's a standard is the pattern that it's normalizing. The pattern is one where the user logically authenticates to their phone, whether it be with a fingerprint or a PIN, but the authentication is local. Then, leveraging the phone's capabilities -- storage, crypto, connectivity, etc. -- the phone authenticates to the server. It's that pattern of a local authentication followed by a server authentication that I think we are going to see over and over.
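
[For technically minded readers, here is a minimal sketch, in Python, of the local-then-server pattern Madsen describes: the user unlocks a key held on the device with a PIN or fingerprint, and the device then proves possession of that key to the server by signing a challenge. It assumes the third-party 'cryptography' package, and the names and flow are an illustration of the idea, not the actual FIDO protocol messages.]

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Registration: the device generates a key pair; only the public key ever
    # leaves the device, so the server stores no shared secret to steal.
    device_key = Ed25519PrivateKey.generate()
    server_registered_public_key = device_key.public_key()

    def local_unlock(pin_entered: str, pin_on_device: str) -> bool:
        # Stand-in for the local step (PIN or fingerprint); nothing here is sent to the server.
        return pin_entered == pin_on_device

    # Authentication: the server issues a random challenge, the unlocked device
    # signs it, and the server verifies the signature with the stored public key.
    challenge = os.urandom(32)
    if local_unlock("1234", "1234"):
        assertion = device_key.sign(challenge)
        try:
            server_registered_public_key.verify(assertion, challenge)
            print("server: user authenticated")
        except InvalidSignature:
            print("server: authentication failed")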

Gardner: Thank you, Paul. It seems to me that most people are onboard with this. I know that, as a user, I'm happy to have the device authenticate. I think developers would love to have this authentication move to a context on a network or with other variables brought to bear. They can create whole new richer services when they have a context for participation. It seems to me the enterprises are onboard too. So there's a lot of potential momentum around this. What does it take now to move the needle forward? What should we expect to hear at CIS?

Moving forward

Diodati: There are two dimensions to moving the needle forward: avoiding the failures of prior mobile authentication systems, and ensuring that modern authentication systems support critical applications. Both are crucial to the success of any authentication system, including FIDO.

At CIS, we have an in-depth, three-hour FIDO workshop and many mobile authentication sessions. 

There are a couple of things that I like about FIDO. First, it can use the biometric capabilities of the device. Many smart phones have an accelerometer, a camera, and a microphone. We can get a really good initial authentication. Also, FIDO leverages public-key technology, which overcomes some of the concerns we have around other kinds of technologies, particularly one-time passwords. 

Madsen: To that last point, Mark, I think FIDO and SAML, or more recent federation protocols, complement each other wonderfully. FIDO is a great authentication technology, and federation historically has not addressed that piece. Federation didn't claim to answer that issue, but if you put the two together, you get a very strong initial authentication. Then, you're able to broadcast that out to the applications that you want to access. And that's a strong combination.

Barrett: One of the things that we haven't really mentioned here -- and Paul just hinted at it -- is the relationship between single sign-on and authentication. When you talk to many organizations, they look at those as two different sides of the same coin. So the more ubiquity you can get, and the more applications you can sign the user on to with less interaction, the better.

Gardner: Before we go a little bit deeper into what’s coming up, let’s take another pause and look back. There have been some attempts to solve these problems. Many, I suppose, have been from a perspective of a particular vendor or a type of device or platform or, in an enterprise sense, using what they already know or have.

We've had containerization and virtualization on the mobile tier. It is, in a sense, going back to the past where you go right to the server and very little is done on the device other than the connection. App wrapping would fall under that as well, I suppose. What have been the pros and cons and why isn’t containerization enough to solve this problem? Let’s start with Michael.

Barrett: If you look back historically, what we've tended to see are lot of attempts that are truly proprietary in nature. Again, my own philosophy on this is that proprietary technology is really great for many things, but there are certain domains that simply need a strong standards-based backplane.

There really hasn't been an attempt at this for some years. Pretty much, we have to go back to X.509 to see the last major standards-based push at solving authentication. But X.509 came with a whole bunch of baggage, as well as architectural assumptions around a very disconnected world view that is kind of antithetical to where we are today, where we have a very largely connected world view.

I tend to think of it through that particular set of lenses, which is that the standards attempts in this area are old, and many of the approaches that have been tried over the last decade have been proprietary.

For example, on my old team at PayPal, I had a small group of folks who surveyed security vendors. I remember asking them to tell me how many authentication vendors there were and to plot that for me by year.

Growing number of vendors

They sighed heavily, because their database wasn’t organized that way, but then came back a couple of weeks later. Essentially they said that in 2007, it was 30-odd vendors, and it has been going up by about a dozen a year, plus or minus some, ever since, and we're now comfortably at more than 100.

Any market that has 100 vendors, none of whose products interoperate with each other, is a failing market, because none of those vendors, bar only a couple, can claim very large market share. This is just a market where we haven’t seen the right kind of approaches deployed, and as a result, we're struck where we are today without doing something different.

Gardner: Paul, any thoughts on containerization, pros and cons?

Madsen: I think of phones as having two almost completely orthogonal aspects. First is how you can leverage the phone to authenticate the user. Whether it's FIDO or something proprietary, there's value in that.

Second is the phone as an application platform, a means to access potentially sensitive applications. What mobile applications introduce that's somewhat novel is the idea of pulling down that sensitive business data to the device, where it can be more easily lost or stolen, given the mobility and the size of those devices.

The challenge for the enterprise is, if you want to enable your employees with devices, or enable them to bring their own in, how do you protect that data? It seems more and more to be recognized as the challenge that you can't.

The challenge is not only protecting the data, but keeping the usage of the phone separate. IT, arguably and justifiably, wants to protect the business data on it, but the employee, particularly in a BYOD case, wants to keep their use of the phone isolated and private.

So containerization or dual-persona systems attempt to slice and dice the phone up into two or more pieces. What is missing from those models, and it’s changing, is a recognition that, by definition, that’s an identity problem. You have two identities—the business user and the personal user—who want to use the same device, and you want to compartmentalize those two identities, for both security and privacy reasons.

Identity standards and technologies could play a real role in keeping those pieces separate. The employee might use Box for business, but might also use it for personal purposes. That's an identity problem, and identity will keep those two usages separate.

Diodati: To build on that a little bit, if you take a look at the history of containerization, there were some technical problems and some usability problems. There was a lack of usability that drove an acceptance problem within a lot of enterprises. That’s changing over time.

To Michael's point about the failure of other standardized approaches to authentication, you could look back at OATH, which was maybe the last big industry push, around 2004-2005, to come up with a standard approach, and it failed on interoperability. OATH was a one-time password, multi-vendor capability, but in the end, you really couldn't mix and match devices. Interoperability is going to be a big, big criterion for acceptance of FIDO. [See more on identity standards and APIs.]
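
[For context on what OATH standardized, here is a minimal sketch in Python of the HMAC-based one-time password (HOTP) algorithm from RFC 4226, the core primitive behind OATH tokens. Note that the server and the token must share the same secret and stay synchronized on a counter, which is the property that distinguishes this approach from the public-key model discussed earlier; the key and counter values shown are illustrative.]

    import hashlib
    import hmac
    import struct

    def hotp(shared_secret: bytes, counter: int, digits: int = 6) -> str:
        # HMAC-SHA-1 over the big-endian counter, then dynamic truncation (RFC 4226).
        mac = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # Both the token and the server derive the same code from the same secret and
    # counter; interoperability depends on every vendor agreeing on exactly this
    # computation and on how secrets are provisioned and synchronized.
    print(hotp(b"12345678901234567890", counter=0))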

Mobile device management

Gardner: Another thing out there in the market now, and it has gotten quite a bit of attention from enterprises as they are trying to work through this, is mobile device management (MDM).  Do you have any thoughts, Mark, on why that has not necessarily worked out or won’t work out? What are the pros and cons of MDM?

Diodati: Most organizations of a certain size are going to need an enterprise mobility management solution. There is a whole lot that happens behind the scenes in terms of binding the user's identity, perhaps putting a certificate on the phone.

Michael talked about X.509. That appears to be the lowest common denominator for authentication from a mobile device today, but that can change over time. We need ways to be able to authenticate users, perhaps issue them certificates on the phone, so that we can do things like IPSec.

Also, we may be required to give some users access to offline secured data. That’s a combination of apps and enterprise mobility management (EMM) technology. In a lot of cases, there's an EMM gateway that can really help with giving offline secure access to things that might be stored on network file shares or in SharePoint, for example.

If there's been a stumbling block with EMM, it's just been the heterogeneity of the devices, making it a challenge to implement a common set of policies.

But the technology of EMM also had to mature. We went from BlackBerry Enterprise Server, which did a pretty good job in a homogeneous world but maybe didn't address everybody's needs, to the AirWatches and MobileIrons of the world, which have had to deal with heterogeneity and increased functionality.

Madsen: The fundamental issue with MDM is, as the name suggests, that you're trying to manage the device, as opposed to applications or data on the device. That worked okay when the enterprise was providing employees with their BlackBerry, but it's hard to reconcile in the BYOD world, where users are bringing in their own iPhones or Androids. In their mind, they have a completely justified right to use that phone for personal applications and usage.

So some of the mechanisms of MDM remain relevant, being able to wipe data off the phone, for example, but the device is no longer the appropriate granularity. It's some portion of the device that the enterprise is authoritative over.

Gardner: It seems to me, though, that we keep coming back to several key concepts: authentication and identity, and then, of course, a standardization approach that ameliorates those interoperability and heterogeneity issues. [See more on a new vision for identity.]

So let’s look at identity and authentication. Some people make them interchangeable. How should we best understand them as being distinct? What’s the relationship between them and why are they so essential for us to move to a new architecture for solving these issues? Let’s start with you, Michael.

Identity is center

Barrett: I was thinking about this earlier. I remember having some arguments with Phil Becker back in the early 2000s when I was running the Liberty Alliance, which was the standards organization that came up with SAML 2.0. Phil coined that phrase, "Identity is center," and he used to argue that essentially everything fell under identity.

What I thought back then, and still largely do, is that identity is a broad and complex domain. In a sense, as we've let it grow today, they're not the same thing. Authentication is definitely a sub-domain of security, along with a whole number of others. We talked about containerization earlier, which is a kind of security-isolation technique in many regards. But I am not sure that identity and authentication are exactly in the same dimension.

In fact, the way I would describe it is this: if we talk about something like the levels-of-assurance model, which we're all fairly familiar with in the identity sense, that model today has authentication and identity-verification concepts bound together.

In fact, I suspect that in the coming year or two, we're probably going to have to decouple those and say that it's not really a linear, one-dimensional thing, with level one, level two, level three, and level four. Rather, it's a kind of two-dimensional matrix, where we have identity verification concepts on one side and authentication on the other. Today, we've collapsed them together, and I am not sure we have actually done anybody any favors by doing that.

Definitely, they're closely related. You can look at some of the difficulties that we've had with identity over the last decade and say that it’s because we actually ignored the authentication aspect. But I'm not sure they're the same thing intrinsically. 

Gardner: Interesting. I've heard people say that any high-level approach to mobile device security has to be about identity. How else could it possibly work? Authentication has to be part of that, but identity seems to be getting more traction as a way to solve these issues across all other variables, to be able to adjust accordingly over time, and even to automate by policy.

Mark, how do you see identity and authentication? How important is identity as a new vision for solving these problems?

Diodati: You would have to put security at the top, and identity would be a subset of things that happen within security. Identity includes authorization -- determining if the user is authorized to access the data. It also includes provisioning. How do we manipulate user identities within critical systems -- there is never one big identity in the sky. Identity includes authentication and a couple of other things.

To answer the second part of your question, Dana, in the role of identity and trying to solve these problems, we in the identity community have missed some opportunities in the past to talk about identity as the great enabler.

With mobile devices, we want to have the ability to enforce basic security controls, but it's really about identity. Identity can enable so many great things to happen, not only just for enterprises, but within the digital economy at large. There's a lot of opportunity if we can orient identity as an enabler.

Authentication and identity

Madsen: I just think authentication is something we have to do to get to identity. If there were no bad people in the world and if people didn’t lie, we wouldn’t need authentication.

We would all have a single identifier, we would present ourselves, and nobody else would lay claim to that identifier. There would be no need for strong authentication. But we don’t live there. Identity is fundamental, and authentication is how we lay claim to a particular identity.

Diodati: You can build the world's best authorization policies. But they are completely worthless unless you've done the authentication right, because you have zero confidence that the users are who they say they are.

Gardner: So, I assume that multifactor authentication is also in that subset. It's just a way of doing it better or more broadly, with more variables and devices that can be brought to bear. Is that correct?

Madsen: Indeed.

Diodati: The definition of multifactor has evolved over time too. In the past, we talked about "strong authentication." What we meant was "two-factor authentication," and that is really changing, particularly when you look at some of the emerging technologies like FIDO.

If you look at the broader trends around adaptive authentication, the relationship to the user or the consumer is more distant. We have to apply a set of adaptive techniques to get better identity assurance about the user.

Gardner: I'm just going to make a broad assumption here that the authentication part of this does get solved, that multifactor authentication, adaptive, using devices that people are familiar with, that they are comfortable doing, even continuing to use many of the passwords, single sign-on, all that gets somehow rationalized.

Then, we're elevated to this notion of identity. How do we then manage that identity across these domains? Is there a central repository? Is there a federation? How would a standard come to bear on that major problem of federation, control, management, updating, and so forth? Let's go back to Michael on that.

Barrett: I tend to start from a couple of different perspectives on this. One is that we do have to fix the authentication standards problem, and that's essentially what FIDO is trying to do.

So, if you accept that FIDO solves authentication, what you are left with is an evolution of a set of standards that, over the last dozen years or so, starting with SAML 2.0, but then going on up through the more recent things like OpenID Connect and OAuth 2.0, and so on, gives you a robust backplane for building whatever business arrangement is appropriate, given the problem you are trying to solve.

Liability

I chose the word "business" quite consciously in there, because it's fair to say that there are certain classes of models that have stalled out commercially for a whole bunch of reasons, particularly around the dreaded L-word, i.e., liability.

We tried to build things that were too complicated. We could describe this grand, long-term vision of what the universe looked like. Andrew Nash is very fond of saying that we can describe this rich ecosystem of identity-enabled services and so on, but you can't get there from here, which is the punch line of a rather old joke.

Gardner: Mark, we understand that identity is taking on a whole new level of importance. Are there some examples we can look to that illustrate an identity-centric approach to security, governance, and manageability for mobile-tier activities -- even ways it can help developers bring new application programming interfaces (APIs) into play, with context for commerce and location? These are things we haven't even really scratched the surface of yet.

Help me understand, through an example rather than telling, how identity fits into this and what we might expect identity to do if all these things can be managed, standards, and so forth.

Diodati: Identity is pretty broad when you take a look at the different disciplines that might be at play. Let’s see if we can pick out a few.

We have spoken about authentication a lot. Emerging standards like FIDO are important, so that we can support applications that require higher assurance levels with less cost and usability problems.

A difficult trend to ignore is the API-first development modality. We're talking about things like OAuth and OpenID Connect. Both of those are very important, critical standards when we start talking about the use of API-based, and even non-API HTTP-based, applications.

OpenID Connect, in particular, gives us some abilities for users to find where they want to authenticate and give them access to the data they need. The challenge is that the mobile app is interacting on behalf of a user. How do you actually apply things like adaptive techniques to an API session to raise identity assurance levels? Given that OpenID Connect was just ratified earlier this year, we're still in early stages of how that’s going to play out.
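
[To make the OpenID Connect piece concrete, here is a minimal sketch in Python of what an application sees inside an OIDC ID token -- a JWT whose claims state who issued the identity (iss), who the user is (sub), which client it was issued to (aud), and when it expires (exp). The token below is hand-built for illustration; a real relying party must also verify the token's signature against the provider's published keys, which this sketch deliberately skips.]

    import base64
    import json
    import time

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    def b64url_decode(segment: str) -> bytes:
        return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

    # Hand-built, unsigned token purely for illustration; issuer and client IDs are hypothetical.
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "iss": "https://idp.example.com",    # who asserted the identity
        "sub": "user-12345",                 # who the user is
        "aud": "mobile-app-client-id",       # which client the token was issued to
        "exp": int(time.time()) + 300,       # when it stops being valid
    }).encode())
    id_token = header + "." + payload + ".signature-goes-here"

    # The relying party reads the claims (and, in real life, verifies the signature first).
    claims = json.loads(b64url_decode(id_token.split(".")[1]))
    assert claims["aud"] == "mobile-app-client-id"
    assert claims["exp"] > time.time()
    print("identity asserted by", claims["iss"], "for subject", claims["sub"])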

Gardner: Michael, any thoughts on examples, use cases, a vision for how this should work in the not too distant future?

Barrett: I'm a great believer in open standards, as I think I have shown throughout the course of this discussion. I think that OpenID Connect, in particular, and the fact that we now have that standard ratified, [is useful]. I do believe that the standards, to a very large extent, allow the creation of deployments that will address those use-cases that have been really quite difficult [without these standards in place].

Ahead of demand

The problem that you want to avoid, of course, is that you don’t want a standard to show up too far ahead of the demand. Otherwise, what you wind up with is just some interesting specification that never gets implemented, and nobody ever bothers deploying any of the implementations of it.

So, I believe in just-in-time standards development. As an industry, identity has matured a lot over the last dozen years. When SAML 2.0 and Shibboleth came along, it was a very federation-centric world, addressing a very small class of use cases. Now, we have a more robust set of standards. What's going to be really interesting is to see how those new standards get used to address use cases that the previous standards really couldn't.

I'm a bit of a believer in sort of Darwinian evolution on this stuff and that, in fact, it’s hard to predict the future now. Niels Bohr famously said, "Prediction is hard, especially when it involves the future.” There is a great deal of truth to that.

Gardner: Hopefully we will get some clear insights at the Cloud Identity Summit this month, July 19, and there will be more information to be had there.

I also wonder whether we're almost past the point now when we talk about mobile security, cloud security, data-center security. Are we going to get past that, or is this going to become more of a fabric of security that the standards help to define and then the implementations make concrete? Before we sign off, Mark, any last thoughts about moving beyond segments of security into a more pervasive concept of security?

Diodati: We're already starting to see that, where people are moving towards software as a service (SaaS) and moving away from on-premises applications. Why? A couple of reasons. The revenue and expense model lines up really well with what they are doing; they pay as they grow. There's not a big bang of initial investment. Also, SaaS is turnkey, which means that much of the security lifting is done by the vendor.

That's also certainly true with infrastructure as a service (IaaS). If you look at things like Amazon Web Services (AWS), it is more complicated than SaaS, but it is a way to converge security functions within the cloud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

Tags:  BriefingsDirect  Cloud Identity Summit  Dana Gardner  Interarbor Solutions  Mark Diodati  Michael Barrett  mobile computing  mobile devices  OAuth  Paul Madsen  Ping Identity 


As the digital economy ramps up, expect a new identity management vision to leapfrog passwords

Posted By Dana L Gardner, Monday, July 07, 2014

A stubborn speed bump continues to hobble the digital economy. We're referring to the outdated use of passwords and limited identity-management solutions that hamper getting all of our devices, cloud services, enterprise applications, and needed data to work together in anything approaching harmony. 

The past three years have seen a huge uptick in the number and types of mobile devices, online services, and media. Yet, we're seemingly stuck with 20-year-old authentication and identity-management mechanisms -- mostly based on passwords.

The resulting chasm between what we have and what we need for access control and governance spells ongoing security lapses, privacy worries, and a detrimental lack of interoperability among cross-domain cloud services. So, while a new generation of standards and technologies has emerged, a new vision is also required to move beyond the precarious passel of passwords that each of us seems to use all the time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

The fast approaching Cloud Identity Summit 2014 this July gives us a chance to recheck some identity-management premises -- and perhaps step beyond the conventional to a more functional mobile future. To help us define these new best ways to manage identities and access control in the cloud and mobile era, please join me in welcoming our guest, Andre Durand, CEO of Ping Identity. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The Cloud Identity Summit is coming up, and at the same time, we're finding that this digital economy is not really reaching its potential. There seems to be this ongoing challenge as we have more devices, more varieties of services, and this need for cross-domain interaction capability. It's almost as if we're stymied. So why is this problem so intractable? Why are we still dealing with passwords and outdated authentication?

Durand: Believe it or not, you have to go back 30 years to when the problem originated, when the Internet was actually born. Vint Cerf, one of the founders and creators of the Internet, was interviewed by a reporter two or three years back. He was asked, if he could go back 30 years to when he was creating the Internet, what he would do differently. And he thought about it for a minute and said, "I would have tackled the identity problem."

He continued, "We never expected the Internet to become the Internet. We were simply trying to route packets between two trusted computers through a standardized networking protocol. We knew that the second we started networking computers, you needed to know who the user was that was making the request, but we also knew that it was a complicated problem." So, in essence, they punted.

Roll forward 30 years, and the bulk of the security industry and the challenges we now face in identity management at scale, Internet or cloud scale, all result from not having tackled identity 30 years ago. Every application, every device, every network that touches the Internet has to ask you who you are. The easiest way to do that is via user name and password, because there was no concept of who the user was on the network at a more fundamental universal layer.

So all this password proliferation comes as a result of the fact that identity is not infrastructure today in the Internet, and it's a hard problem to retrofit the Internet for a more universal notion of who you are, after 30 years of proliferating these identity silos. 

Internet of things

Gardner: It certainly seems like it’s time, because we're not only dealing with people and devices. We're now going into the Internet of Things, including sensors. We have multiple networks and more and more application programming interfaces (APIs) and software-as-a-service (SaaS) applications and services coming online. It seems like we have to move pretty quickly. [See more on identity standards and APIs.]

Durand: We do. The shift that began to exacerbate, or at least highlight, the underlying problem of identity started with cloud and SaaS adoption, somewhere around the 2007-2008 time frame. That moved some of the applications outside of the data center. Then, starting around 2010 or 2011, when we really got into the smartphone era, the user followed the smartphone off the corporate network and the corporate-issued computer and onto AT&T's network.

So you have the application outside of the data center. You have the user off the network. The entire notion of how to protect users and data broke. It used to be that you put your user on your network with a company-issued computer accessing software in the data center. It was all behind the firewall.

Those two shifts changed where the assets were -- the applications, the data, and the user. The paradigm of security, and how to manage the user and what they have access to, also had to shift, and it just brought to light the larger problem in identity.

Gardner: And the stakes here are fairly high. We're looking at a tremendously inefficient healthcare system here in the United States, for example. One of the ways that could be ameliorated and productivity could be increased is for more interactions across boundaries, more standards applied to how very sensitive data can be shared. If we can solve this problem, it seems to me there is really a flood of improvement in productivity to come behind it.

Durand: It's enormous and fundamental. Someone shared with me several years ago a simple concept that captures the essence of how much friction we have in the system today in and around identity and users in their browsers going places. The comment was simply this: In your browser you're no longer limited to one domain. You're moving between different applications, different websites, different companies, and different partners with every single click.

What we need is the ability for your identity to follow your browser session, as you're moving between all these security domains, and not have to re-authenticate yourself every single time you click and are off to a new part of the Internet.

We need that whether that means employees sitting at their desktop on a corporate network, opening their browser and going to Salesforce.com, Office 365, Gmail, or Box, or whether it means a partner going into another partner’s application, say to manage inventory as part of their supply chain.

We have to have an ability for the identity to follow the user, and fundamentally that represents this next-gen notion of identity.

Gardner: I want to go back to that next-gen identity definition in a moment, but I notice you didn't mention authenticate through biometrics to a phone or to a PC. You're talking, I think, at a higher abstraction, aren’t you? At software or even the services level for this identity. Or did I read it wrong?

Stronger authentication

Durand: No, you read it absolutely correctly. I was definitely speaking at 100,000 feet there. Part of the solution, as I see it playing out, is that what's coming in the future is stronger authentication to fewer places -- say, stronger authentication to your corporate network or to your corporate identity. Then, it's a seamless ability to access all the corporate resources, whether they're proprietary business applications in the data center or applications in the public or even the private cloud.

So, stronger user authentication is likely through the mobile phone, since the phones have become such a phenomenal platform for authentication. Then, once you authenticate to that phone, there will be a seamless ability to access everything, irrespective of where it resides.

Gardner: Then, when you elevate to that degree, it allows for more policy-driven and intelligence-driven automated and standardized approaches that more and more participants and processes can then adopt and implement. Is that correct?

Durand: That’s exactly correct. We had a notion of who was accessing what, the policy, governance, and the audit trail inside of the enterprise, and that was through the '80s, '90s, and the early 2000s. There was a lot of identity management infrastructure that was built to do exactly that within the enterprise.


Gardner: With directories.

Durand: Right, directories and all the identity management, Web access management, identity-management provisioning software, and all the governance software that came after that. I refer to all of those systems as Identity and Access Management 1.0.

It was all designed to manage this, as long as all the applications, users, and data were behind the firewall on the company network. Then, the data and the users moved, and now even the business applications are moving outside the data center to the public and private cloud.

We now live in this much more federated scenario, and there is a new generation of identity management that we have to install to enable the security, auditability, and governance of that new highly distributed or federated scenario.

Gardner: Andre, let’s go back to that "next-generation level" of identity management. What did you mean by that? 

Durand: There are a few tenets that fall into the next-generation category. For me, businesses are no longer a silo. Businesses are today fundamentally federated. They're integrating with their supply chain. They're engaging with social identities, hitting their consumer and customer portals. They're integrating with their clients and allowing their clients to gain easier access to their systems. Their employees are going out to the cloud.

Fundamentally integrated

All of these are scenarios where the IT infrastructure in the business itself is fundamentally integrated with its customers, partners, and clients. So that would be the first tenet. They're no longer a silo.

The second thing is that in order to achieve the scale of security around identity management in this new world, we can no longer install proprietary identity and access management software. Every interface for how security and identity is managed in this federated world needs to be standardized.

So we need open identity standards such as SAML, OAuth, and OpenID Connect, in order to scale these use cases between companies. It's not dissimilar to the era of email before we had Internet email and the SMTP standard.

Companies had email, but it was enterprise email. It wouldn’t communicate with other companies' proprietary email. Then, we standardized email through SMTP and instantly we had Internet-scaled email.

I predict that the same thing is occurring, and will occur, with identity. We'll standardize all of these cases to open identity standards and that will allow us to scale the identity use cases into this federated world.


The third tenet is that, for many years, we really focused on the browser and web infrastructure. But now, you have users on mobile devices and applications accessing APIs. You have as many, if not more, transactions occurring through the API and mobile channel as you do through the web.

So whatever infrastructure we develop needs to normalize the API and mobile access the same way that it does the web access. You don’t want two infrastructures for those two different channels of communication. Those are some of the big tenets of this new world that define an architecture for next-gen identity that’s very different from everything that came before it.

Gardner: To your last tenet, how do we start to combine -- without gaps and without security issues -- a federated authentication and identity management capability for web activities as well as for those specific APIs and specific mobile apps and platforms?

Durand: I'll give you a Ping product-specific example, because it's for exactly that reason that we chose the path that we did for this new product. We have a product called PingAccess, which is a next-gen access control product that provides both web access management, for browsers and users using web applications, and API access management, for when companies want to expose their APIs to developers for mobile applications and to other web services.

Prior to PingAccess allowing you to enable policy for both the API channel and the web channel in a single product, those two realms were typically served by independent products. You'd buy one product to protect your APIs and another product to do your web-access management.

Same product

Now with this next-gen product, PingAccess, you can do both with the same product. It’s based upon OAuth, an emerging standard for identity security for web services, and it’s based upon OpenID Connect, which is a new standard for single sign-on and authentication and authorization in the web tier. [See more on identity standards and APIs.]

We built the product to cross the chasm between API and web, and also built it based upon open standards, so we could really scale the use cases.
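
For readers who want to picture what the API-channel side of that policy enforcement looks like, here is a minimal sketch of a resource server checking an incoming OAuth 2.0 bearer token against an authorization server's token introspection endpoint (RFC 7662). It is illustrative only -- the endpoint URL, client credentials, and scope name are placeholders, not Ping's actual product interfaces.

# Minimal sketch of OAuth 2.0 bearer-token checking on the API channel.
# The introspection endpoint, client credentials, and scope are hypothetical.
import requests

INTROSPECTION_URL = "https://idp.example.com/as/introspect.oauth2"  # placeholder
CLIENT_ID = "api-gateway"      # placeholder resource-server credentials
CLIENT_SECRET = "change-me"

def token_allows(bearer_token: str, required_scope: str) -> bool:
    """Ask the authorization server whether the token is active and carries the scope."""
    resp = requests.post(
        INTROSPECTION_URL,
        data={"token": bearer_token},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=5,
    )
    resp.raise_for_status()
    claims = resp.json()
    return claims.get("active", False) and required_scope in claims.get("scope", "").split()

# Example: gate an inventory API call
# if not token_allows(request_token, "inventory:read"):
#     reject the request with HTTP 401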

Gardner: Whenever you bring out the words "new" and "standard," you'll get folks who might say, "Well, I'm going to stick with the tried and true." Is there any sense of the level of security, privacy control management, and governance control with these new approaches, as you describe them, that would rebut that instinct to stick with what you have?

Durand: As far as the instinct to stick with what you have, keep in mind that the alternative is proprietary, and there is nothing about proprietary that necessarily means you have better control or more privacy.


The standards are really defining secure mechanisms to pursue a use case between two different entities. You want a common interface, a common language to communicate. There's a tremendous amount of work that goes into them by the entire industry to make sure that those standards are secure and privacy enabling.

I'd argue that it's more secure and privacy enabling than the one-off proprietary systems and/or the homegrown systems that many companies developed in the absence of these open standards.

Gardner: Of course, with standards, it's often a larger community, where people can have feedback and inputs to have those standards evolve. That can be a very powerful force when it comes to making sure that things remain stable and safe. Any thoughts about the community approach to this and where these standards are being managed?

Durand: A number of the standards are being managed now by the Internet Engineering Task Force (IETF), and as you know, they're well-regarded, well-known, and certainly well-recognized for their community involvement and having a cycle of improvement that deals with threats, as they emerge, as the community sees them, as a mechanism to improve the standards over time to close those security issues.

Gardner: Going back to the Cloud Identity Summit 2014, is this a coming-out party of sorts for this vision of yours? How do you view the timing right now? Are we at a tipping point, and how important is it to get the word out properly and effectively?

Durand: This is our fifth annual Cloud Identity Summit. We've been working toward this combination of where identity and the cloud and mobile ultimately intersect. All of the trends that I described earlier today -- cloud adoption, mobile adoption, moving the application and the user and the device off the network -- are driving more and more awareness toward a new approach to identity management that is disruptive and fundamentally different from the traditional way of managing identity.

On the cusp

We're right on the cusp where the adoption across both cloud and mobile is irrefutable. Many companies are now going all in, making cloud-first and mobile-first the posture for their enterprises across those two dimensions.

So it is at a tipping point. It's the last nail in the coffin for enterprises to get them to realize that they're now in a new landscape and need to reassess their strategies for identity, when the business applications, the ones that did not convert to SaaS, move to Amazon Web Services, Equinix, or to Rackspace and the private-cloud providers.

That, all of a sudden, would be the last shift where applications have left the data center and all of the old paradigms for managing identity will now need to be re-evaluated from the ground up. That’s just about to happen.

Gardner: Another part of this, of course, is the users themselves. If we can bring to the table doing away with passwords, that in itself might encourage a lot of organic adoption and demand for this sort of capability. Any sense of what we can do in terms of behavior at the user level, and what would incentivize users to knock on the door of their developers or IT organization and ask for the sort of capability and vision we've described?


Durand: Now you're highlighting my kick-off speech at PingCon, which is Ping's Customer and Partner Conference the day after the Cloud Identity Summit. We acquired a company and a technology last year in mobile authentication to make your mobile phone the second factor for strong authentication for corporations, effectively replacing the one-time tokens that traditional vendors have issued for strong authentication.

It's an application you load on your smartphone, and it gives you the ability to simply swipe across the screen to authenticate when requested. We'll be demonstrating the mobile phone as a second factor of authentication. What I mean there is that you would type in your username and password and then be asked to swipe the phone, just to verify your identity before getting into the company.

We'll also demonstrate how you can use the phone as a single-factor authentication. As an example, let’s say I want to go to some cloud service, Dropbox, Box, or Salesforce. Before that, I'm asked to authenticate to the company. I'd get a notification on my phone that simply says, "Swipe." I do the swipe, it already knows who I am, and it just takes me directly to the cloud. That user experience is phenomenal.

When you experience the ability to get to the cloud by authenticating to the corporation first and simply swiping your mobile phone, it just changes how we think about authentication and how we think about the utility of having a smartphone with us all the time.
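
To sketch how that swipe experience might be wired up behind the scenes, here is a rough illustration of a push-based factor: the server asks a mobile-authentication service to send a "swipe to approve" prompt and waits for the result before completing the sign-on. The service, URLs, and field names are hypothetical and are not taken from Ping's product.

# Hypothetical push-MFA flow: send a swipe challenge to the user's phone,
# then poll until the user approves or the challenge times out.
import time
import requests

MFA_BASE = "https://mfa.example.com/api"   # placeholder service

def swipe_authenticate(username: str, timeout_s: int = 60) -> bool:
    # 1. Ask the MFA service to push a "swipe to approve" prompt to the user's phone.
    challenge = requests.post(f"{MFA_BASE}/challenges", json={"user": username}, timeout=5).json()

    # 2. Poll for the result; a real integration would likely use a callback instead.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{MFA_BASE}/challenges/{challenge['id']}", timeout=5).json()
        if status["state"] == "approved":
            return True      # swipe completed -- continue the SSO redirect to the cloud app
        if status["state"] == "denied":
            return False
        time.sleep(2)
    return False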

Gardner: This aligns really well, and the timing is awesome for what both Google with Android and Apple with iOS are doing in terms of being able to move from screen to screen seamlessly. Is that something that’s built in this as well?

If I authenticate through my mobile phone, but then I end up working through a PC, a laptop, or any other number of interfaces, is this something that carries through, so that I'm authenticated throughout my activity?

Entire vision

Durand: That's the entire vision of identity federation. Authenticate once, strongly to the network, and have an ability to go everywhere you want -- data center, private cloud, public SaaS applications, native mobile applications -- and never have to re-authenticate.

Gardner: Sounds good to me, Andre. I'm all for it. Before we sign off, do we have an example? It's been an interesting vision, and we've talked about the what and the how, but is there a way to illustrate what you get and how it works in practice when this works well -- perhaps in an enterprise, perhaps across boundaries?

Durand: There are three primary use cases in our business for next-generation identity, and we break them up into workforce, partner, and customer identity use cases. I'll give you quick examples of all three.

In the workforce use case, what we see most is a desire for enterprises to enable single sign-on to the corporation, to the corporate network, or the corporate active directory, and then single-click access to all the applications, whether they're in the cloud or in the data center. It presents employees in the workforce with a nice menu of all their application options. They authenticate once to see that menu and then, when they click, they can go anywhere without having to re-authenticate.


That's primarily the workforce use case. It gives IT the ability to control which applications employees use, where they're going in the cloud, and what they can do there, with an audit trail of that activity and full control over how employees access cloud applications. The next-gen solutions that we provide accommodate that use case.

The second use case is what we call a customer portal or a customer experience use case. This is a scenario where customers are hitting a customer portal. Many of the major banks in the US and even around the world use Ping to secure their customer website. When you log into your bank to do online banking, you're logging into the bank, but then, when you click on any number of the links -- to order checks or for check fulfillment, say -- that goes out to Harland Clarke or to Wealth Management.

That goes to a separate application. That banking application is actually a collection of many applications, some run by partners, some run by different divisions of the bank. The seamless customer experience, where the user never sees another login or registration screen, is all secured through Ping infrastructure. That's the second use case.

The third use case is what we call a traditional supply chain or partner use case. The world's largest retailer is our customer. They have some 100,000 suppliers that access inventory applications to manage inventory at all the warehouses and distribution centers.

Prior to having Ping technology, they would have to maintain the username and password of the employees of all those 100,000 suppliers. With our technology they allow single sign-on to that application, so they no longer have to manage who is an employee of all of those suppliers. They've off-loaded the identity management back to the partner by enabling single sign-on.

About 50 of the Fortune 100 are Ping customers. They include Best Buy, where you don't have to log in to go to the reward zone. You're actually going through Ping.

If you're a Comcast customer and you log into comcast.net and click on any one of the content links or email, that customer experience is secured though Ping. If you log into Marriott, you're going through Ping. The list goes on and on.

In the future

Gardner: This all comes to a head as we're approaching the July Cloud Identity Summit 2014 in Monterey, Calif., which should provide an excellent forum for keeping the transition from passwords to a federated, network-based intelligent capability on track.

Before we sign off, any idea of where we would be a year from now? Is this a stake in the ground for the future, or something we could extend our vision toward in terms of what might come next, if we make some strides and a lot of what we have been talking about today gets significant uptake and use?

Durand: We're right on the cusp of the smartphone becoming a platform for strong, multi-factor authentication. That adoption is going to be fairly quick. I expect that, and you're going to see enterprises adopting en masse stronger authentication using the smartphone.

Gardner: I suppose that is an accelerant to the bring-your-own-device (BYOD) trend. Is that how you see it as well?


Durand: It’s a little bit orthogonal to BYOD. The fact that corporations have to deal with that phenomenon brings its own IT headaches, but also its own opportunities in terms of the reality of where people want to get work done.

But the fact that we can assume that all of the devices out there now are essentially smartphone platforms, very powerful computers with lots of capabilities, is going to allow the enterprises now to leverage that device for really strong multi-factor authentication to know who the user is that’s making that request, irrespective of where they are -- if they're on the network, off the network, on a company-issued computer or on their BYOD.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.


Tags:  Andre Durand  API  BriefingsDirect  Cloud Identity Summit  Dana Gardner  Identity management  Interarbor Solutions  OAuth  OpenID Connect  Ping Identity  Single sign-on 


Standards and APIs: How to best manage identity and security in the mobile era

Posted By Dana L Gardner, Wednesday, July 02, 2014

The advent of the application programming interface (API) economy has forced a huge, pressing need for organizations to both seek openness and improve security for accessing mobile applications, data, and services anytime, anywhere, and from any device.

Awash in inadequate passwords and battling subsequent security breaches, business and end-users alike are calling for improved identity management and federation technologies. They want workable standards to better chart the waters of identity management and federation, while preserving the need for enterprise-caliber risk remediation and security.

Meanwhile, the mobile tier is becoming an integration point for scads of cloud services and APIs, yet unauthorized access to data remains common. Mobile applications are not yet fully secure, and identity control that meets audit requirements is hard to come by. And so developers are scrambling to find the platforms and tools to help them manage identity and security, too.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

Clearly, the game has changed for creating new and attractive mobile processes, yet the same old requirements remain wanting around security, management, interoperability, and openness.

BriefingsDirect assembled a panel of experts to explore how to fix these pressing needs: Bradford Stephens, the Developer and Platforms Evangelist in the CTO's Office at Ping Identity; Ross Garrett, Senior Director of Product Marketing at Axway; and Kelly Grizzle, Principal Software Engineer at SailPoint Technologies. The sponsored panel discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are approaching the Cloud Identity Summit 2014 (CIS), which is coming up on July 19 in Monterey, Calif. There's a lot of frustration with identity services that meet the needs of developers and enterprise operators as well. So let’s talk a little bit about what’s going on with APIs and identity.

What are the trends in the market that keep this problem pressing? Why is it so difficult to solve?

Interaction changes

Stephens: Well, as soon as we've settled on a standard, the way we interact with computers changes. It wasn’t that long ago that if you had Active Directory and SAML and you hand-wrote security endpoints of model security products, you were pretty much covered.

Stephens

But in the last three or four years, we've gone to a world where mobile is more important than web. Distributed systems are more important than big iron. And we communicate with APIs instead of channels and SDKs, and that requires a whole new way of thinking about the problem.

Garrett: Ultimately, APIs are becoming the communication framework, the fabric, through which all of the products that we touch today talk to each other. That, by extension, presents a new identity challenge. That's a big part of the reason why we've seen some friction and schizophrenia around the types of identity technologies available to us.

So we see waves of different technologies come and go, depending on what is the flavor of the month. That has caused some frustration for developers, and will definitely come up during our Cloud Identity Summit in a couple of weeks.

Grizzle: APIs are becoming exponentially more important in the identity world now. As Bradford alluded to, the landscape is changing. There are mobile devices as well as software-as-a-service (SaaS) providers out there who are popping up new services all the time. The common thread between all of them is the need to be able to manage identities. They need to be able to manage the security within their system. It makes total sense to have a common way to do this.

Grizzle

APIs are key for all the different devices and ways that we connect to these service providers. Becoming standards based is extremely important, just to be able to keep up with the adoption of all these new service providers coming on board.

Gardner: As we describe this as the API economy, I suppose it’s just as much a marketplace and therefore, as we have seen in other markets, people strive for predominance. There's jockeying going on. Bradford, is this a matter of an architectural shift? Is this a matter of standards? Or is this a matter of de-facto standards? Or perhaps all of the above?

Stephens: It’s getting complex quickly. I think we're settling on standards, like it or not, mostly positively. I see most people settling on at least OAuth 2.0 as a standard token, and OpenID Connect for implementation and authentication of information, but I think that’s about as far as we get.

There's a lot of struggle with established vendors vying to implement these protocols. They try to bridge the gap between the old world of say SAML and Active Directory and all that, and the new world of SCIM, OAuth, OpenID Connect. The standards are pretty settled, at least for the next two years, but the tools, how we implement them, and how much work it takes developers to implement them, are going to change a lot, and hopefully for the better.

Evolving standards

Garrett: We have identified a number of new standards that are bridging this new world of API-oriented connectivity. Learning from the past of SAML and legacy single sign-on infrastructure, we definitely need some good technology choices.

Garrett

The standards seem to be leading the way. But by the same token, we should keep a close eye on how the market moves relative to how fast standards change. We've all seen things like OAuth progress more slowly than some of the implementations out there. This means the ratification of the standard was happening after many providers had actually implemented it. It's the same for OpenID Connect.

We are in line there, but the actual standardization process doesn’t always keep up with where the market wants to be.

Gardner: We've seen this play out before: standards can lag. Getting consensus, developing the documentation and details, and getting committees to sign off can take time, and markets move at their own velocity. Many times in the past, organizations have hedged their bets by adopting multiple standards or tracking multiple ways of doing things, which requires federation and integration.

Kelly, are there big tradeoffs with standards and APIs? How do we mitigate the risk and protect ourselves by both adhering to standards, but also being agile in the market?

Grizzle: That’s kind of tricky. You're right in that standards tend to lag. That’s just part and parcel of the standardization process. It’s like trying to pass a bill through Congress. It can go slow.


Something that we've seen some of these standards do right, from OAuth and from the SCIM perspective, is that both of those have started their early work with a very loose standardization process, going through not one of the big standards bodies, but something that can be a little bit more nimble. That’s how the SCIM 1.0 and 1.1 specs came out, and they came out in a reasonable time frame to get people moving on it.

Now that things have moved to the Internet Engineering Task Force (IETF), development has slowed down a little bit, but people have something to work with and are able to keep up with the changes going on there.

I don’t know that people necessarily need to adopt multiple standards to hedge their bets, but by taking what’s already there and keeping a pulse on the things that are going to change, as well as the standard being forward-thinking enough to allow some extensibility within it, service providers and clients, in the long run, are going to be in a pretty good spot.

Quick primer

Gardner: We've talked a few technical terms so far, and just for the benefit of our audience, I'd like to do a quick primer, perhaps with you Bradford. To start: OAuth, this is with the IETF now. Could you just quickly tell the audience what OAuth is, what it’s doing, and why it’s important when we talk about API, security and mobile?

Stephens: OAuth is the foundation protocol for authorization when it comes to APIs for web applications. OAuth 2 is much more flexible than OAuth 1.

Basically, it allows applications to ask for access to stuff. It seems very vague, but it’s really powerful once you start getting the right tokens for your workflows. And it provides the same foundation for everything else we do for identity and APIs.

The best example I can think of is when you log into Facebook, and Facebook asks whether you really want this app to see your birthday, all your friends’ information, and everything else. Being able to communicate all that over the OAuth 2.0 is a lot easier than how it was with OAuth 1.0 a few years ago.
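
To make the "asking for access to stuff" flow concrete, here is a minimal sketch of the OAuth 2.0 authorization code grant from a web application's point of view. All URLs, credentials, and scope names are placeholders rather than any particular provider's values.

# Sketch of the OAuth 2.0 authorization code grant (RFC 6749).
# All URLs and credentials are placeholders.
from urllib.parse import urlencode
import requests

AUTHORIZE_URL = "https://auth.example.com/authorize"
TOKEN_URL     = "https://auth.example.com/token"
CLIENT_ID     = "my-app"
CLIENT_SECRET = "change-me"
REDIRECT_URI  = "https://myapp.example.com/callback"

# Step 1: send the user's browser to the authorization server to grant consent.
consent_url = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "profile photos:read",   # the "stuff" the app is asking to access
    "state": "anti-csrf-token",
})

# Step 2: the user approves; the browser returns to REDIRECT_URI with ?code=...
def exchange_code_for_token(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=5)
    resp.raise_for_status()
    return resp.json()   # contains access_token, token_type, expires_in, ...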

Gardner: How about OpenID Connect? This is with the OpenID Foundation. How does that relate, and what is it?


Stephens: If OAuth actually is the medium, OpenID Connect can be described as the content of the message. It's not the message itself. That's usually done with a JSON (JavaScript Object Notation) Web Token, but OpenID Connect provides the actual identity information.

When you access an API and you authenticate, you choose a scope, and one of the most common scopes is OpenID Profile. This OpenID Profile will just have things like your username, maybe your address, various other pieces of identity information, and it describes who the "you" is, who you are.
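
In practical terms, the OpenID Profile arrives as claims inside an ID token, a signed JSON Web Token returned alongside the OAuth access token. The sketch below only decodes the token payload to show those claims; a real relying party must also verify the token's signature against the provider's published keys. The claim values shown are illustrative.

# Sketch: reading the identity claims from an OpenID Connect ID token.
# NOTE: this decodes the payload for illustration only -- a real relying party
# must verify the JWT signature against the provider's published keys (JWKS).
import base64
import json

def id_token_claims(id_token: str) -> dict:
    header_b64, payload_b64, signature_b64 = id_token.split(".")
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)   # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Typical claims in the payload when "openid profile email" was requested:
# {
#   "iss": "https://idp.example.com", "sub": "user-1234",
#   "name": "Alice Example", "preferred_username": "alice",
#   "email": "alice@example.com", "aud": "my-app", "exp": 1700000000
# }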

Gardner: And SCIM, you mentioned that Kelly, and I know you have been involved with it. So why don’t you take the primer for SCIM, and I believe it’s Simple Cloud Identity Management?

Grizzle: That's the historical name for it, Simple Cloud Identity Management. When we took the standard to the IETF, we realized that the problems that we were solving were a little bit broader than just the cloud and within the cloud. So the acronym now stands for the System for Cross-domain Identity Management.

That’s kind of a mouthful, but the concept is pretty simple. SCIM is really just an API and a schema that allows you to manage identities and identity-related information. And by manage them, I mean to create identities in systems to delete them, update them, change the entitlements and the group memberships, and things like that.

Gardner: From your perspective, Kelly, what is the relationship then between OAuth and SCIM?

Managing identities

Grizzle: OAuth, as Bradford mentioned, is primarily geared toward authorization, and answers the question, "Can Bob access this top-secret document?" SCIM is really not in the authorization and authentication business at all. SCIM is about managing identities.

OAuth assumes that an identity is already present. SCIM is able to create that identity. You can create the user "Bob." You can say that Bob should not have access to that top-secret document. Then, if you catch Bob doing some illicit activity, you can quickly disable his account through a SCIM call. So they fit together very nicely.
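
As a rough illustration of those SCIM operations over REST, here is a minimal sketch that creates the user "Bob" and later disables his account, using the standard SCIM 2.0 /Users endpoint. The base URL and admin token are placeholders.

# Sketch of SCIM 2.0 user management (RFC 7643/7644): create a user, then disable him.
# Base URL and bearer token are placeholders.
import requests

SCIM_BASE = "https://idp.example.com/scim/v2"
HEADERS = {"Authorization": "Bearer <admin-token>", "Content-Type": "application/scim+json"}

# Create the user "Bob"
bob = requests.post(f"{SCIM_BASE}/Users", headers=HEADERS, json={
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "bob",
    "name": {"givenName": "Bob", "familyName": "Example"},
    "active": True,
}, timeout=5).json()

# Later: disable Bob's account with a PATCH that flips the "active" attribute
requests.patch(f"{SCIM_BASE}/Users/{bob['id']}", headers=HEADERS, json={
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "replace", "path": "active", "value": False}],
}, timeout=5)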

Gardner: In the real world, developers like to be able to use APIs, but they might not be familiar with all the details that we've just gone through on some of these standards and security approaches.

How do we make this palatable to developers? How do we make this something that they can implement without necessarily getting into the nitty-gritty? Are there some approaches to making this a bit easier to consume as a developer?


Stephens: As a developer who's relatively new to this field -- I worked in database for three years -- I've had personal experience of how hard it is to wrap your head around all the standards and all these flows and stuff. The best thing we can do is have tool providers give them tools in their native language, or in the way developers work with things.

This needs well-documented, interactive APIs -- things like Swagger -- and lots of real-world code examples. Once you've actually done the process of authentication through OAuth, getting a JSON Web Token, and getting an OpenID Connect profile, it's really simple to see how it all works together, if you do it all through a SaaS platform that handles all the nitty-gritty, like user creation and all that.

If you have to roll your own, though, there's not a lot of information out there besides the WhitePages and Wall Post. It’s just a nightmare. I tried to roll my own. You should never roll your own.

So having SaaS platforms to do all this stuff, instead of having documents, means that developers can focus on providing their applications, and just understand that they have a token and a profile, rather than worrying about which tokens carry which information across OAuth and OpenID Connect.

I don’t really care how it all works together; I just know that I have this token and it has the information I need. And it’s really liberating, once you finally get there.

So I guess the best thing we can do is provide really great tools that solve the identity-management problems.

Tools: a key point

Garrett: Tools, that’s the key point here. Whether we like it or not, developers tend to be kind of lazy sometimes and they certainly don’t have the time or the energy to understand every facet of the OAuth specification. So providing tools that can wrap that up and make it as easy to implement as possible is really the only way that we get to really secure mobile applications or any API interaction. Because without a deep understanding of how this stuff works, you can make pretty fundamental errors.

Having said that, at least we've started to take steps in the right direction with the standards. OAuth is built at least with the idea of mobile access in mind. It’s leveraging REST and JSON types, rather than SOAP and XML types, which are really way too heavyweight for mobile applications.

So the standards, in their own right, have taken us in the right direction, but we absolutely need tools to make it easy for developers.

Grizzle: Tools are of the utmost importance, and some of the identity providers and people with skin in the game, so to speak, are helping to create these tools and to open-source them, so that they can be used by other people.


Another thing that Ross touched on was keeping the simplicity in the spec. These things that we're addressing -- authorization, authentication, and managing identities -- are not extremely simple concepts always. So in the standards that are being created, finding the right balance of complexity versus completeness and flexibility is a tough line to walk.

With SCIM, as you said, the first initial of the acronym used to stand for Simple. It's still a guiding principle that we use to try to keep these interactions as simple as possible. SCIM uses REST and JSON, just like some of these other standards. Developers are familiar with that. Putting the burden on the right parties for implementation is very important, too. Making it easy on clients -- the ones who are going to be implementing these a lot -- is pretty important.

Gardner: Do these standards do more than help the API economy settle out and mature? Cloud providers or SaaS providers want to provide APIs and they want the mobile apps to consume them. By the same token, the enterprises want to share data and want data to get out to those mobile tiers. So is there a data-management or brokering benefit that goes along with this? Are we killing multiple birds with one set of standards?

Garrett: The real issue here, when we think about the new types of products and services that the API economy is helping us deliver, is around privacy and ultimately customer confidence. Putting the user in control of who gets to access which parts of my identity profile, or how contextual information about me can perhaps make identity decisions easier, allows us to lock down, or better understand, these privacy concerns that the world has.

Identity isn’t the most glamorous thing to talk about -- except when it all goes wrong -- and some huge leak makes the news headlines, or some other security breach has lost credit-card numbers or people’s usernames and passwords.

Hand in hand

In terms of how identity services are developing the API economy, the two things go hand in hand. Unless people are absolutely certain about how their information is being used, they simply choose not to use these services. That's why all the work that the API management vendors and the identity management vendors are doing to bring those two together is so important.

Gardner: You mentioned that identity might not be sexy or top of mind, but how else can you manage all these variables on an automated or policy-driven basis? When we move to the mobile tier, we're dealing with multiple networks. We're dealing with multiple services ... cloud, SaaS, and APIs. And then we're linking this back to enterprise applications. How other than identity can this possibly be managed?

Stephens: Identity is often thought of as usernames and passwords, but it’s evolving really quickly to be so much more. This is something I harp on a lot, but it’s really quickly becoming that who we are online is more important than who we are in real life. How I identify myself online is more important than the driver's license I carry in my wallet.


As you know, your driver’s license is like a real-life token of information that describes what you're allowed to do in your life. That’s part of your identity. Anybody who has lost their license knows that, without that, there's not a whole lot you can do.

Bringing that analogy back to the Internet, what you're able to access and what access you're able to give other people or other applications to change important things, like your Facebook posts, your tweets, or go through your email and help categorize that is important. All these little tasks that help define how you live, are all part of your identity. And it’s important that developers understand that because any connected application is going to have to have a deep sense of identity.

Gardner: Let me pose the same question, but in a different way. When you do this well, when you can manage identity, when you can take advantage of these new standards that extend into mobile requirements and architectures, with the API economy in mind, what do you get? What does it endow you with? What can you do that perhaps you couldn’t do if you were stuck in some older architectures or thinking?

Grizzle: Identity is key to everything we do. Like Bradford was just saying, the things that you do online are built on the trust that you have with who is doing them. There are very few services out there where you want completely anonymous access. Almost every service that you use is tied to an identity.

So it’s of paramount importance to get a common language between these. If we don’t move to standards here, it's just going to be a major cost problem, because there are a ton of different providers and clients out there.

If every provider tries to roll their own identity infrastructure, without relying on standards, then, as a client, if I need to talk to two different identity providers, I need to write to two different APIs. It’s just an explosive problem, with the amount that everything is connected these days.

So it’s key. I can’t see how the system will stand up and move forward efficiently without these common pieces in place.

Use cases

Gardner: Do we have any examples along these same lines of what you get when you do this well and appropriately, based on what you all think is the right approach and direction? We've been talking at a fairly abstract level, but it really helps solidify people's thinking and understanding when they can look at a use case, a named situation, or an application.

Stephens: If you want a good example of how OAuth delegation works, building a Facebook app or just working on Facebook app documentation is pretty straightforward. It gives you a good idea of what it means to delegate certain authorization.

Likewise, Google is very good. It’s very integrated with OAuth and OpenID Connect when it comes to building things on Google App Engine.


So if you want to secure an API that you built using Google Cloud on Google App Engine, which is trivial to do, Google Cloud Endpoints provides a really good example. In fact, there is a button you can hit in their examples called Use OAuth, and that OAuth transports the OpenID Connect profile, so that's a pretty easy way to go about it.

Garrett: I'll just take a simple consumer example, and we've touched on this already. In the past, every individual service or product offered only its own identity solution. So I have to create a new identity profile for every product or service that I'm using. This has been the case for a long time in the consumer web and in the enterprise setting as well.

So we have to be able to solve that problem and offer a way to reuse existing identities. It involves taking technologies like OpenID Connect, which are totally hidden from the end user, and simply saying that you can use an existing identity -- your LinkedIn or Facebook credentials, for example -- to access some new product. That takes a lot of burden away from the consumer. Ultimately, it provides us a better security model end to end.

The thing that these new identity service providers have been offering has, behind the scenes, been making your lives more secure. Even though some people might shy away from using their Facebook identity across multiple applications, in many ways it’s actually better to, because that’s really one centralized place where I can actually see, audit, and adjust the way that I'm presenting my identity to other people.

That’s a really great example of how these new technologies are changing the way we interact with products everyday.

Standardized approach

Grizzle: At SailPoint, the company that I work for, we have a client, a large chip maker, who has seen the identity problem and really been bitten by it within their enterprise. They have somewhere around 3,500 systems that have to be able to talk to each other, exchange identity data, and things like that.

The issue is that every time they acquire a new company or bring a new group into the fold, that company has its own set of systems that speak their own language, and it takes forever to get them integrated into their IT organization there.

So they've said that they're not going to support every app that these people bring into the IT infrastructure. They're going with SCIM and they are saying that all these apps that come in, if they speak SCIM, then they'll take ownership of those and pull them into their environment. It should just plug in nice and easy. They're doing it just because of a resourcing perspective. They can't keep up with the amount of change to their IT infrastructure and keep everything automated.


Gardner: I want to quickly look at the Cloud Identity Summit that’s coming up. It sounds like a lot of these issues are going to be top of mind there. We're going to hear a lot of back and forth and progress made.

Does this strike you, Bradford, as a tipping point of some sort, that this event will really start to solidify thinking and get people motivated? How do you view the impact of this summit on cloud identity?

Stephens: At CIS, we're going to see a lot of talk about real-world implementation of these standards. In fact, I'm running the Enterprise API track and I'll be giving a talk on end-to-end authentication using JAuth, OAuth, and OpenID Connect. This year is the year that we show that it's possible. Next year, we'll be hearing a lot more about people using it in production.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.


Tags:  API  Bradford Stephens  BriefingsDirect  Cloud Identity Summit  Dana Gardner  Identity management  Interarbor Solutions  OAuth  OpenID Connect  Ping Identity  Single sign-on 
