Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 


Creative Solutions in Healthcare improves client services and saves money using VMware vCloud Air hybrid cloud

Posted By Dana L Gardner, Wednesday, December 10, 2014

The next BriefingsDirect innovator case study interview explores how a Texas healthcare provider has adopted cloud computing, and in doing so has both saved money and improved the quality of its many services.

Listen to the podcast. Find it on iTunes. Download the transcript.

At the recent VMworld 2014 Conference in San Francisco, our moderator, Dana Gardner, Principal Analyst at Interarbor Solutions, interviewed Shawn Wiora, CIO at Creative Solutions in Healthcare in Fort Worth, Texas, to learn more about the process of adopting cloud, and why cloud has benefited them as a complex organization.

Here are some excerpts:

Wiora: Creative Solutions in Healthcare is the largest independent owner and operator of skilled nursing facilities (SNFs), which are nursing homes, in the State of Texas. We also operate assisted-living facilities and we provide long-term care solutions, primarily in Texas.

We have about 6,000 employees. Many of them are nurses, and many of them are capturing data about our patients and our residents. Our residents are in the thousands and, as a private company, we're able to deliver solutions in the marketplace that are really geared toward lifestyle, care, nutrition, activities, and programs. That's why the company has been so successful -- we have this passionate care about our residents.

Gardner: Of course, healthcare is really changing in terms of how it's using IT and leveraging IT, and I suppose you're no different.

Wiora: That's exactly right. Healthcare has been ramping up in terms of IT, not only catching up with the industry but, in some cases, leading at the forefront, especially when it comes to patient care and delivering innovative diagnosis and treatment programs over telemedicine and other types of electronic media.

Gardner:  Why has cloud computing been appealing to you with your requirements? What challenges were you trying to solve when you looked at the cloud model?

Going virtual

Wiora: It's an interesting story. About two years ago, the company was 100 percent physical in terms of its server infrastructure. Similar to many other long-term care facilities, we have to meet new forms of compliance as they relate to HIPAA, the HITECH Act, and the NIST framework.

If you take all of those, in addition to the new apps being required of the organization and the new types of health exchanges we're involved with, the requirements were just escalating dramatically. So we started with a physical infrastructure and we looked at going virtual.

It was a wholesale change and ramp-up. We took on a big challenge by embarking on an initiative that allowed the company to go from physical to virtual, and at the same time, we went from premises-based to the cloud. We did both together.

Fortunately, we already had some really good experience with virtualization, but by no means did we have a program deployed across the server infrastructure. So we issued an RFP and we selected a group of vendors at the top of the pyramid. At the top were Azure, AWS, and VMware's vCloud. We chose Microsoft Azure.

We started a pilot with Azure, and it was really interesting. We're a Microsoft house, and the team chose Azure based on the fact that not only were we a Microsoft house, but we had a number of initiatives that we wanted to move to the cloud, including Microsoft Exchange.

So, we started moving Exchange into the cloud with our Azure program. Then, we asked Microsoft to issue a document that indicated that they would support Exchange, their own software, in Azure, their own cloud, and guess what happened?

We did not get acknowledgment. Ultimately, they would not indicate that they would support their own software in their own cloud. We were flabbergasted. We just couldn't believe it.

We ended up pulling the plug on that project, on that initiative. We went back to the marketplace and we chose vCloud Air, and we quickly ramped up. That's the reason why this project has been so successful -- the ramp-up.

The team at VMware understood what we were doing in terms of our timeline, our projects, and the applications that we were looking to move to the cloud. That's really where they differentiated themselves from Azure and AWS in terms of on-boarding, because we did pilots on all of those cloud infrastructures. VMware's vCloud Air team had the best on-boarding process of any IT project that I've been involved with in the past 20 years.

Had our back

It really made the IT team at Creative Solutions in Healthcare feel like those guys had our back. They really cared about what was happening. They knew that we were under the gun, because we had been through that Azure cluster, and it was not even feasible for us to go down the path with our own infrastructure. It ended up being a great partnership.

Gardner: Shawn, tell me to what degree you're hybrid. Do you have an on-premises, virtualized set of applications? Do you have another set of applications? You've opted to go into the public cloud, vCloud Air. Is this something that you're still sorting out in terms of what goes where? How about the data? Is that also on-prem, and how are you factoring in the hybrid approach?

Wiora: We're very deliberate with our cloud strategy. We started with a pilot of some core applications, got our feet wet in the cloud, and then we took that success that we had. Again, the on-boarding that we received in that process was really second to none.

That made the team feel very comfortable with moving other infrastructure. Now, we've moved our entire back-office infrastructure, our accounting, a number of custom apps, provisioning, and supply chain into the cloud with vCloud Air.

We're also in a hybrid environment, as you've indicated. We have servers throughout our facilities and servers at headquarters. We have other software-as-a-service (SaaS) models that we're interacting with. We're moving data from other providers back into our on-premises environment and then we're moving that into vCloud Air. There's a lot of hybrid going on right now.

Gardner: So that integration, management, and orchestration, being able to automate that, seems very important to you. You want to be able to set this up, have it run, and then devote your energy to all these new projects.

Wiora: Yes. That's really where the return is to the company, the shareholders, the board, and the management team. That's what IT should be focused on: How do we ultimately deliver solutions that the other business units, and ultimately our patients, can appreciate.

We're in the long-term care industry and we've been very successful in growing the company based on the passionate, caring model. The IT organization aligns its passion and care toward the patients.

Instead of being wrapped up with servers, virtualization, and all of the other things that VMware is the best at doing, we're outward-focused on the business units and the patients.

New product appeal

Who has more data than healthcare? There are some organizations that have a lot of data, but we track what our patients eat, what time they go to sleep, what they do during the day in terms of activity. We're talking each and every day across each and every facility, thousands of patients.

So VMware's Object Based storage is something that is in our future.

Gardner: So, one last area for adoption. You have talked about the on-boarding process, but there's also the end-user absorption of new approaches from IT. How has this gone with your end users?

Have they noticed a change in the type of applications? Has it been something that they didn't notice? What's been that result at that end-user inception point when you made this transition to cloud?

Wiora: It's been game-changing for the company. It's been game-changing for our patients. Instead of being fearful about approaching IT, the business units are coming to IT, and they know that we can ramp up applications very quickly.

We just ramped up our maintenance application in a couple of days. In the past, that would have taken months of planning. The business unit laughed. They just looked at IT and said, "You have to be kidding. This is up and running already?"

Advice for others

Gardner: That's a strong testament. How about advice for other organizations that are beginning that RFP process, that are thinking about cloud, looking at the different approaches, the different providers? Any words of wisdom in hindsight that you could offer now that you have been through that process?

Wiora: Absolutely. Who wants to reinvent the wheel? If I'm looking at going to the cloud for the first time or if I am looking at enhancing my hybrid cloud environment, I would suggest you look at TCO.

Look at what your labor costs are. Look at who the A-Team is in the industry for virtualization. Look at what the roadmaps are and look at which vendors really don't care what you put in your cloud infrastructure. There are vendors, as we talked about earlier, that really have the ability to approve or disapprove what you put in there.

I'd look at that, but you have to look at TCO and look at partnering with an organization that can help you easily ramp up. Then, I think you look at how you want to run your IT organization. If those things make sense to you, then I would suggest you look at vCloud.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Tags:  BriefingsDirect  cloud computing  CSHC  Dana Gardner  hybrid cloud  Interarbor Solutions  Shawn Wiora  vCloud AIr  virtualization  VMWare 


HP launches Haven OnDemand to deliver big data services suite in the cloud

Posted By Dana L Gardner, Wednesday, December 03, 2014

The next BriefingsDirect big data news analysis discussion examines some major announcements made at the HP Discover conference this week, the debut of HP Haven OnDemand, a new set of analytics-in-the-cloud services.

Our panel of users and experts unpacks the details from Barcelona, and explores the implications of the delivery of cloud-based HP Vertica OnDemand and HP IDOL OnDemand components within the HP Haven OnDemand suite.

Listen to the podcast. Find it on iTunes. Download the transcript.

To learn more about how big data changes everything via these new HP cloud offerings, we're joined by Fernando Lucini, Chief Technology Officer for HP Big Data; Howard Brown, Founder and CEO of RingDNA, based in Los Angeles; and Neal Holley, Operations Director at GateWest New Media Ltd., based in Bristol, UK. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Fernando, we've heard quite a bit of news over the last few days at the HP Discover 2014 Conference in Barcelona, and HP Software General Manager Robert Youngjohns delivered the details Tuesday about HP Haven OnDemand. Let's look at this from the big picture. Why are data and analytics, combined with the cloud hosting and delivery model, such a good fit? Why is this an important milestone for the cloud?

Lucini: It's exciting in a number of ways. If you think about what we've launched, we recognized early that our customers, our partners, and developers out there were going to consume technologies in a new way. This is something that the industry all agreed on. We were just early birds in this and we recognized that it's all going to be about on-demand consumption, self-service, speed, elasticity, and all those nice things.

So in some respects, the industry wants to consume things in this fashion. We recognize it, and then the next step for us is to think about the people and what they're going to do with these kinds of services.

You can think about it in two different ways. You have the people out there in the real world who are creating applications on top of very rich information -- the mobile apps that we all use. These are applications that look at human information as well as business information, or very structured information. We have that persona, and we really wanted to make sure that developer had all the right tools in that on-demand, self-service model.

The other part of the equation is the world of the data warehouse, where we have very large amounts of information. We're traditionally applying analysis, but in this new generation, we need the tools that can do this at a bigger scale, can do it quicker, and can be more flexible. This is our Vertica technology and the same kind of on-demand, self-service needs are out there. So the second part of our answer to the question for industry is that we'll provide you an on-demand way to serve that particular purpose.

The announcement comes out of a number of good reasons. It provides the market with an answer to both of these people's needs. It does so in an incredibly elastic fashion and it does it with incredible richness. It has quite a unique degree of depth and variety.

If you look at the IDOL OnDemand functionality, there are new APIs that you can explore and use with the freemium model.

If you look at the Vertica OnDemand space, it allows you to manage whatever size warehouse you need in an incredibly elastic and transparent way, but still on-demand.

There's so much to tell. It's such an exciting time for the industry, and being at HP, leading the charge, is pretty impressive and important.

Great importance

Gardner: Clearly, this isn't news just for one part of an IT organization. This seems to have a great importance for data scientists, IT operators, developers, even line of business users of business intelligence (BI).

Holley

So let's look at this a little bit from the perspective of the IT operator. This is something that's a cost issue in many respects and broadens the use of something like IDOL and Vertica to a much larger market. With it being in the cloud, you don't need to set up your data center and you don’t need to have those capital expenditures.

Let’s start at the top, where we're talking about this as a cloud model. Why does this broaden the market for data and analytics?

Lucini: Go back to this IT operator. This guy or gal has always wanted to provide their business with the tools. These guys want to provide the analysis capabilities, and they want to have the ingestion and the features, but it's a tough thing, as you very well put it. There is capital expenditure, maintenance, and training.

The differentiator here is that the acceleration is going to be immediate. Let's use simple examples: I want to be able to take video and do face recognition, extract license plates, extract behaviors, or listen to voice and do something with it. I want to do that without the burden of all the science that goes on behind doing these things.

This IT operator is going to say, "No problem. Here’s the link. You pay this as you go. Enjoy." And that's as complex as it gets. So the acceleration is going to be immediate, which translates almost immediately to create more and more applications and doing more and more analysis, which is what we all want, at a lower cost point in shorter times.

IT operators are going to be incredibly happy that they can provide the business with what the business needs at a lower cost and get outcomes quicker.
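
To make the "here's the link, pay as you go" idea concrete, here is a minimal sketch of what consuming a hosted analysis API of this kind could look like from Python. The endpoint URL, API key, and response format are placeholders for illustration, not the documented IDOL OnDemand interface.

```python
import requests

# Placeholder endpoint and key -- illustrative only, not the documented
# IDOL OnDemand API. A real service publishes its own URL and parameters.
API_URL = "https://api.example.com/1/api/sync/recognizespeech/v1"
API_KEY = "your-api-key"

def transcribe(audio_path):
    """Send an audio file to a hosted speech-recognition API and return its JSON result."""
    with open(audio_path, "rb") as audio:
        response = requests.post(
            API_URL,
            data={"apikey": API_KEY},
            files={"file": audio},
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Pay-as-you-go consumption: no servers to stand up, just a call per request.
    print(transcribe("call-recording.wav"))
```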

Gardner: This should be of interest to large enterprises that might want to augment their current warehouse approach and strategy. It also sounds like for those organizations that may have been too small or didn’t have the budget to set up their own on-premises data warehouse, they now have an opportunity to walk right into a deep, powerful analytics capability.

Lucini: It democratizes the whole idea of analytics. You want to make it as democratic as possible. Size isn't necessarily important with regards to intelligence, interest, having something to say, or having something to analyze. It’s all about making it democratic, and the cloud really helps in that.

It's also about giving functionality that wasn't accessible to some of these guys. We're talking about very advanced analysis -- technologies for video, voice, or text analysis, let alone warehousing. It’s now available to everybody. They can go in there, test it out, play with it, see how valuable it is to them, and stop dreaming about the value, but make the value. Then, if that’s what they need, they can just start paying as they go and getting on with their lives.

General availability

Gardner: Let's dig into a little of the detail. HP announced Haven OnDemand on December 2, with general availability coming in Q1 2015, so pretty rapidly. Vertica OnDemand is the one coming up first, and IDOL OnDemand is currently available as a freemium model, as you mentioned, on an early-access basis, but will be generally available a few months later in 2015.

What else should we know about the pricing here? Why is this compelling, not only as OPEX versus CAPEX, but also in terms of the pricing itself?

Lucini: Indeed. In some respects, because you're removing the necessity to own the hardware and to scale it up, we're also providing economies of scale in what we're doing. In HP Cloud Services, we have an amazing cloud that we can go to elastically, and everybody gets the advantage of this.

If you think about it, ultimately in one of these models, you get a lot of people coming in to have a look, play, investigate, understand, and learn. Then, you get a smaller percentage that actually commit, build the bigger applications, and run their warehouses.

It balances out and it allows us to have a lower price point. It also allows us to charge as we go. It allows us a pay-as-you-go model. It all works out. Over time, we'll understand more and more what people want. This is being done in a very collaborative fashion, listening to the market for on-demand.

From the very beginning, we have been very Net Promoter Score focused. I challenge anybody to get themselves a login, and you'll see the Net Promoter Score kick in.

All the analysis is very much linked to what you want to do, what’s important for you, what’s being used most, and what gives us the most economies. That drives us to be more competitive.

It’s very transparent. It’s very clean. You should be in a position where you understand exactly what you're using and what you are paying for it, and it should allow you to toggle back and forth on that need. It’s pretty cool.

Gardner: As for the actual cloud that this is running on, is there a choice, or is this starting out on HP Helion, the HP public cloud? What's the roadmap for the public-cloud infrastructure that this operates on?

Lucini: At the moment, this is running in HP Cloud Services, which is Helion-based, of course. It is all designed on top of Helion. So the roadmap over the next few quarters is that it will be deployable in any Helion implementation. As long as you have Helion, you can deploy the services underneath.

Of course, Helion is a flavor of OpenStack. So you have the ability to use this in other flavors of OpenStack, but we're principally focused on Helion. We're principally focused on the Public HP Cloud Services and the private Helion implementations with our colleagues from Enterprise Services.

No difference

In some respects, in the next year, it should be a choice for you whether to go public cloud for what you need to do. If you're a developer and you just want to create your own app, private versus public doesn't make a difference to you.

Corporates may want to use this inside the firewall. As you know, at HP we have some of the largest corporates out there. If you're one of these guys and need that privacy, you can install Helion and run these services on top of Helion. Following the HP philosophy, it's a matter of what the client requires, and we'll achieve that.

Gardner: It sounds as if this has been made of, by, and for a hybrid cloud model over time.

Lucini: Correct. Most of our big customers are hybrid, and we're delighted to serve them.

In the meantime, as they go into a mode of using this stuff on Helion inside the firewall, they'll still get all the elasticity that Helion provides them. They'll still get all the simplicity that REST and web services on demand provide them, and the flexibility that Vertica OnDemand provides them for scalability. In some respects, there is no downside. There is absolutely no downside to anything that's happening here. It's just a matter of choice.

Gardner: We'll get to our use cases and the examples of how this is being used shortly, but I just want to look at the competitive landscape. A big player out there, of course, in the public cloud is Amazon Web Services, and Amazon has what's called Redshift (http://aws.amazon.com/redshift/). It's their data warehouse in the cloud. How does what HP has announced compare and contrast with Redshift? Why is it a worthy competitor, and is the price comparable?

Lucini: Of course, guys out there and everybody listening might know Vertica is a leading product in the analytics space and in the warehousing space. So we're coming at this already as a proven leader inside the firewall.

You get all of the economies, flexibility, and features that Vertica provides; the Flex Zones, all of the optimizations, and the incredible scaling growth factors; and you get it in an on-demand package.

Just because we now have an on-demand version, these things don't go away. It's quite the opposite. They're immediately available. In that respect, I think we have a strong proposition against Redshift, because you have all the features and functions, not just the database itself.

In terms of pricing, I think we're competitive. The features and functions are worth the spend. Our customer base, our history, and our legacy certainly prove that to be the case. Little by little, more and more of the features will seep in, and more customers will start to get comfortable with using it. We already have a few out there in beta land.

We're going to compete. Because of the features, the Flex Zones and other things, we'll carve our own space as well.

What is the differentiator?

Gardner: One of the things that seems unique to me, Fernando, is the IDOL OnDemand being so broad in terms of the types of media, content, information, and data that can now be brought into what’s essentially the type of analytics engine you would only think of for structured information. So it's the best of the structured analytics and high-performance environment, with that breadth and depth of the various types of content. Is that a differentiator in your opinion?

Lucini: Absolutely. I call it everything on-demand. As you notice, I tend not to differentiate between BOD and IOD. The whole philosophy was that we deal with unstructured, structured, and semi-structured information every day to build what we need for our businesses. So why should we see this differently?

If I happen to have an image, it's an image. If I happen to have a file, it's a file. If I happen to have an Excel sheet, it's an Excel sheet. All of these things are materially important. So let’s give our application developer and our data analyst a way to consume all this.

We have the connectors in the cloud, ways for you to suck information into the platform. We have the ability for you to index them and analyze them. We have some protected APIs for you to have a play around with.

We have text-mining APIs. Obviously, this is a platform for us. So even though we're using the word Vertica and IDOL, underneath IDOL OnDemand, we have Vertica powering some of our features for user management. All our billing and other APIs are coming up.

It's all about giving the application developer all the tools. What the data is, isn't necessarily important. What's important is that they can process it, use it, extract as much value from it as possible, and make their business successful.

So you are absolutely right. It's as broad data-wise as possible. It's as broad in analytics as possible. At the same time, it's still market leading in every single one of those APIs, which is pretty cool stuff.   

Gardner: Now, when you're able to bring all sorts of information and media together, when you're able to tap web services, social media, when you're able to create a sentiment engine and a search engine capability, you're really starting to develop intelligence in new ways.

It seems to me, you can gain insight into markets, prospects, competition, customer inclinations, and directions. It's really about bringing more of a data-driven aspect to a business in ways that had really been sort of an art before, something that was not always by experience, but was by gut instinct.

Before we go to our use cases, how are we really changing a business environment here? Are we talking about a data-driven approach? Are we giving the type of tools that will move a marketing organization, for example, from guesswork into a scientific approach to how they make decisions?

Testing instincts

Lucini: You put it very nicely. We're moving into a world where we're allowing instincts to be tested, and tested quickly. In the past, we had a lot of clever professionals in the marketing world making educated guesses about what’s going on, what I like and don’t like, what you like and don’t like, or what’s popular and what’s not.

We're opening the door for businesses to take data, take a sample of it or take it all, it's their choice, whatever that may be, and in whatever varieties they come, to test out their theories, to see if this theory is correct.

I used to call it the CIO conundrum, where the CIO thinks they've got something and it becomes very difficult for them to prove if they do or don’t, and then they question the results when they get them.

We want them to be able to test this out. If they have an opportunity with their voice data and they think there's massive value in the voice data and they want to cross-correlate it to the social presence, do it, and let the data speak for itself.

It's now no longer difficult. Just go into the platform, put the voice in there, put the text in there, use the analytics tools, and give us your enterprise resource planning (ERP) warehouse. We'll do the queries and we'll create what we call combinations -- which is everything coming together as one -- and test the value.

Now, it no longer matters that this is not a very large project with a very large budget. It will prove out the case. We have a next-generation way of proving things out and being capable of proving things out.

That might lead you to a very interesting onsite project with our tools, where you're inside a firewall, but you have proven it out. Or it might take you to a very interesting on-demand implementation. Either way, you perform the testing, the proving, or the thinking in a much more practical way.

It's very exciting stuff, because there is a real change in the industry, and we all have to adapt to it.
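
As a rough sketch of the "combinations" idea -- lining up a structured warehouse metric against a score derived from unstructured text -- consider the Python fragment below. The sentiment endpoint, response fields, and sample figures are hypothetical, not an HP-documented API.

```python
import requests

# Hypothetical endpoint and sample data -- a sketch of testing a theory, not a real API.
SENTIMENT_URL = "https://api.example.com/1/api/sync/analyzesentiment/v1"
API_KEY = "your-api-key"

# Structured side: figures that would normally live in the warehouse.
quarterly_revenue = {"acme": 120000, "globex": 85000, "initech": 40000}

# Unstructured side: recent call or social-feed text per customer.
recent_text = {
    "acme": "Support was slow again and the last release broke our workflow.",
    "globex": "Really happy with the new dashboard; the rollout was painless.",
    "initech": "Considering alternatives; the pricing no longer makes sense for us.",
}

def sentiment_score(text):
    """Ask a hosted sentiment API for an aggregate score (assumed to be in [-1, 1])."""
    resp = requests.post(SENTIMENT_URL, data={"apikey": API_KEY, "text": text})
    resp.raise_for_status()
    return resp.json().get("aggregate", {}).get("score", 0.0)

# The "combination": line up a structured metric with an unstructured-derived one.
for customer, revenue in sorted(quarterly_revenue.items(), key=lambda kv: -kv[1]):
    score = sentiment_score(recent_text[customer])
    flag = "AT RISK" if score < 0 else "ok"
    print(f"{customer:<10} revenue={revenue:>8}  sentiment={score:+.2f}  {flag}")
```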

Gardner: Let's learn how some people have been using this already to change their business. Let's go first to RingDNA. Howard Brown, tell us a little bit about your company, what you do, and then how you've been using Haven OnDemand from HP?

Brown: RingDNA is a comprehensive sales acceleration platform that allows companies to create high-performance sales teams by combining powerful communications tools with prospect or customer DNA. That's a combination of marketing data, social data, customer relationship management (CRM) data, and account history, and pulling that all together to allow a sales rep to perform sales faster.

Data for inside sales

Gardner: It’s almost as if you're putting the tools of a data scientist in the hands of a salesperson without them having to be a scientist, to get all sorts of information to make the best call on a call in real-time on an inside sales basis.

Brown: You've got it. It's applying a scientific approach to sales. It's taking all of the data that exists out there which can be truly overwhelming, prioritizing it, and making it contextual to make sales much more effective.

Gardner: And this cuts across communications, as well as data, applications, and web services. Is that correct?

Brown: Absolutely. We apply both a theory-testing model and a set of communication tools. When a RingDNA customer walks in in the morning, they know exactly who they should be calling and who they should be emailing or texting, with the messages prioritized so that they know exactly who to call, how to reach out to them, and what to say.

What’s so exciting is that you can start to understand buyer intent from marketing data from past interactions with your customers. We can look at voice transcripts and sentiment analysis and have a whole new way of determining who the right prospect is, how we should be contacting them, and with what messages.

Gardner: So it's up to your organization to take the best of technology, data, and analytics and empower those inside salespeople. It sounds like it's been up to HP to take the best of its technology in the cloud model and analysis to empower you. How, in fact, has HP empowered RingDNA with your early access use of HP Haven OnDemand?

Brown: It's been truly game-changing. You nailed it when you talked about taking business information and human information and combining those two. What HP IDOL OnDemand has provided us is the ability to test all kinds of theories, because every business we work with tends to have a different theory of what a hot prospect may be.

They can simply and easily test those theories using RingDNA and HP IDOL OnDemand. If there are buying signals, like someone visiting a website and downloading a whitepaper in combination with other factors, such as that person viewing web pages or maybe tweeting about their product or service, we can look at that buyer’s sentiment through HP IDOL OnDemand.

We're taking a bunch of this data, processing it through IDOL, and making our reps that much more productive and that much more powerful.
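
A simplified sketch of how that kind of theory-driven prioritization can be expressed in code follows. The signal names and weights are hypothetical illustrations, not RingDNA's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    downloaded_whitepaper: bool
    pages_viewed: int
    tweet_sentiment: float  # -1.0 (negative) to +1.0 (positive)

# Hypothetical weights -- each business tunes its own "theory" of a hot prospect.
WEIGHTS = {"whitepaper": 3.0, "page_view": 0.5, "sentiment": 2.0}

def priority(p: Prospect) -> float:
    """Combine marketing, behavioral, and social signals into one call-list score."""
    return (
        WEIGHTS["whitepaper"] * p.downloaded_whitepaper
        + WEIGHTS["page_view"] * p.pages_viewed
        + WEIGHTS["sentiment"] * p.tweet_sentiment
    )

prospects = [
    Prospect("Alice", True, 12, 0.6),
    Prospect("Bob", False, 3, -0.2),
    Prospect("Carol", True, 1, 0.1),
]

# The rep's morning call list, hottest prospect first.
for p in sorted(prospects, key=priority, reverse=True):
    print(f"{p.name}: score={priority(p):.1f}")
```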

Gardner: One of the things you're doing is you are joining and bringing together very disparate data and information and tidbits of analysis. Is HP IDOL OnDemand doing that for you? Are you doing that? How do you make those joins that bring all that information together? Is the cloud the key to doing that?

Cloud is key

Brown: The cloud certainly is the key. We couldn’t deliver the type of product and service we do today without the cloud. RingDNA is all about accelerating a sales team’s ability to close deals. The last thing you want is to negatively impact those teams.

The cloud model means we can quickly implement a RingDNA process within an organization, bring in all that contextual data, bring in all that metadata, and make that rep that much more productive without negatively impacting their workflow. That's critical to any business today.

It’s one thing to be able to deliver information. It’s another thing to be able to deliver information and insight without negatively impacting the business. Let's face it, in this  day and age, we can’t afford to slow down. With tools like IDOL OnDemand and RingDNA, you’re not slowing down teams. You're actually accelerating them beyond what you ever thought was possible.

Gardner: Fernando, as you're listening to Howard, is there anything about the way that RingDNA is using Haven OnDemand that you think highlights some specific benefits or values here? Are they a poster child for a certain way in which you can use Haven OnDemand?

Lucini: Certainly they understand that they need to use tools to solve their problems and they go ahead and do it. In that respect, it’s great to see. There are a bunch of things we could learn as an industry from them in terms of seeing the opportunity of mixing two pieces of data, how these things collide, and how we get them to customers. I would challenge anybody to check them out because ultimately the end result is key, and I think everybody would be impressed.

Gardner: Let’s go to our next example. We're also joined by GateWest and Neal Holley. Neal, tell us a little bit about GateWest, what you do, and how you’ve been using HP Haven OnDemand.

Holley: We're HP Autonomy partners and have been since about 2002. During that time, we have deployed and maintained many IDOL-based systems. We provide a lot of support services to our clients on an annual basis. We also provide user interfaces to the core engine through our internal development team.

As well as enterprise search, we also specialize in knowledge management (KM). We have a couple of products addressing the management of knowledge, particularly within law firms, and recently we launched an application for the iTunes App Store providing mobile access to IDOL OnDemand. We see this as part of our strategy of what we've termed Mobile KM.

Gardner: Tell me a bit more about the iTunes App Store app. What is it called, and how did you use IDOL OnDemand to build it?

Holley: The app is called KnowGate, and it was developed in direct response to the offering of IDOL OnDemand. Over the years, we've found that IDOL on-premises had a large cost of entry. Obviously, with IDOL OnDemand coming on stream, we've found that a whole world of options opened up to us. We were very surprised at how straightforward it was to take the standard tools for producing iPhone and iPad apps and interface them with IDOL OnDemand.

Great performer

It's given us the opportunity to bring the technology that we've worked with for so many years, and found to be such a great performer, to the audience that we've always wanted to bring it to. The offering has allowed us to do that through its low cost of entry. As Fernando said, it's democratizing the tools of the very large corporates that we've traditionally worked for.

Gardner: Help me to better understand this. There is no easier way to adopt a technology than to download it for a few dollars from the app store and instantly fire it up on your mobile device. If I were to download that app today, what would I be able to do with it? Who is the typical user? What is the function that they would gather from it?

Holley: The typical user is predominantly a business user. The first instance is that you would be able to access your KM, your valuable documents or your key information that you need whether in a law firm, or whether it's engineering specifications or your latest contracts.

That’s the first element of it. The second element is being able to actually capture knowledge while on the move and being able to take information from an email or take a photograph of a document, OCR it, and then be able to ingest that into IDOL OnDemand and share it with the rest of your organization.

So it really opens up that kind of ability, and of course, once it’s shared it becomes valuable.
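
A rough sketch of that capture-and-share flow, written in Python for brevity rather than as a mobile app, might look like the following. The OCR and indexing endpoints are placeholders, not the documented IDOL OnDemand API, and KnowGate's internals are not shown.

```python
import json
import requests

# Placeholder endpoints -- illustrative only, not the documented IDOL OnDemand API.
OCR_URL = "https://api.example.com/1/api/sync/ocrdocument/v1"
INDEX_URL = "https://api.example.com/1/api/sync/addtotextindex/v1"
API_KEY = "your-api-key"

def capture_document(photo_path, index_name="firm-knowledge"):
    """OCR a photographed document, then add the recognized text to a shared index."""
    with open(photo_path, "rb") as image:
        ocr = requests.post(OCR_URL, data={"apikey": API_KEY}, files={"file": image})
    ocr.raise_for_status()
    text = " ".join(block.get("text", "") for block in ocr.json().get("text_block", []))

    # Once indexed, the captured knowledge becomes searchable across the organization.
    payload = {"documents": [{"title": photo_path, "content": text}]}
    add = requests.post(
        INDEX_URL,
        data={"apikey": API_KEY, "index": index_name, "json": json.dumps(payload)},
    )
    add.raise_for_status()
    return add.json()

if __name__ == "__main__":
    capture_document("contract-photo.jpg")
```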

Gardner: Very interesting. Fernando, we're seeing with GateWest, this joining of the cloud model with the mobile model. How is that accelerating the use of analytics? That is to say, an application that can gather data and information and extend it to the cloud and then the cloud can create an analytics value and then send it back to that mobile device? How are you seeing that as a powerful new way of broadening the use and value of analytics in general?

Lucini: If you think about it, mobility is everywhere. We all create mobility and mobility apps for everything you have. I'm sure you guys walk around with a mobile device.

We have to be very clear that all of our consumers, even if it's enterprise-consumers versus consumer-consumers, all become little data analysts. We're all much better versed on information than we ever were.

Now you see 18-year-old or 20-year-old kids coming out of university, and their ability to manage information on their devices, in their environment, is incredible. You no longer have a situation where you can separate analytics from mobile.

Mobile apps are mostly about analytics of some description, certainly about adding value to the data that a user asks you to create. When I say "create it," I mean create it indirectly, create it by the motion of your wrist, versus you directly writing something down. So you get these two sources of data.

But it's certainly now such a rich space. Let me give you an example. You can take what's coming out of the back of a device, which is probably machine-driven, all the stuff that really the machine produces. You can put that in Vertica OnDemand and that will be your warehouse for doing the analysis on that: What am I doing, when, how, for how long, all that kind of jazz.

Creating context

At the same time, I'm producing the information directly from my mind. I'm creating context, I'm writing, I'm speaking, or I'm recording, whatever the case may be. Now, IDOL OnDemand can deal with that.

Anybody creating a mobile app is not going to want to have a hard server-based infrastructure, because the whole point of mobility is that it is distributed. It is a distributed computing model.

Those are the kinds of solutions that are on demand, in the cloud, elastic, pay-as-you-go kinds of things. They're perfect for this generation, whether it's enterprise or not. The kinds of partners we have are guys who understand that their intelligence and the value they add is not necessarily that they know a tool, but that they are the experts in their space and they know how to balance Vertica OnDemand.

I have my machine or business information and I need to do something important with that. I have my human information and anything in between, and it's understanding how this information adds value to people's lives, and how they execute on it, that's the key.

So it's a really important moment. Mobile is the linchpin of much of what's going on around this that makes sense. If you look at any company today, there's no chance that they won't have a mobile intent.

At the same time, we have a lot of hackathons in OnDemand. I can tell you that 90 percent of the products that are created as a result of hackathons are mobile. It kind of speaks for itself.

Gardner: I know. The combination of the cloud-delivery model, analysis on demand, or as a service and the mobile device is just creating entirely new opportunities to add value as a consumer and as a company. It's really flipping many businesses around.

Let’s look at a particular business when we think about the impact of this new series of models and how they interact. I'm thinking about the IT organization in a company, in an enterprise.

With HP Software having a very broad portfolio of applications, many of which are designed and geared towards those IT organizations and developer organizations in companies, how can Haven OnDemand with that analysis-as-a-service capability be brought to bear on other HP software applications focused on IT organizations?

Lucini: The beauty of our OnDemand infrastructure is that it was created for everyone. It was created for our customers and it was created for ourselves. Not to unveil too many wonderful things, but there will be a number of announcements of our own tools, which will be powered by OnDemand. And we made a distinction of what is on demand versus what we call core. It’s our language to speak about our internal use versus our external use.

Organizational tools

These are tools that help the IT organizations. We have tools for backup, where the on-demand model will add great flexibility to what the IT operators can do with the information and how they can serve the legal compliance and partner infrastructures.

We have uses of OnDemand for a wider HP software family where they provide analytics, both for security as well as operational systems, and things like that. So it's a very democratic tool. We recognize that the world of information pivots on two things, and that’s why we created a platform.

It pivots on our ability to incredibly scale up and analyze structured information and semi-structured information. That’s why we have a Vertica core engine. We recognize that human beings create information and so we have our IDOL infrastructure.

And it's these two things together that every single one of our internal partners -- IT, our own software products that cater to IT, as well as external customers -- gets to leverage. And then in some cases it goes very heavily one way, or very heavily the other; you have a very, very strong warehouse.

You always have that roadmap of possibility to get you to the other side, either more heavily toward IDOL or Vertica. You can really start, for example, with a Vertica OnDemand warehousing cloud, make it super-flexible, put information in Flex Zones, really massage that data without being held up by schemas, and then work as you go and scale up.

At the same time, think of what happens if you need some enrichment -- if you need to take some information that's coming in, say your social feed, or a voice feed and text information, classify it, and put it into your Flex Zones. That is available, and in the opposite direction, it's exactly the same.

All of our internal partners look at us and say that they're coming at it either from very human or from very machine, or actually in most cases, both. This is the roadmap to get them to take advantage of both in the same platform. So you can see, it's very, very compelling for our internal partners to use, and we are delighted to serve them.
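
For readers unfamiliar with the Flex Zone idea, here is a minimal sketch of loading semi-structured JSON into a Vertica flex table and querying it, assuming the open-source vertica_python client and a reachable Vertica or Vertica OnDemand instance. Connection details, file paths, and field names are placeholders.

```python
import vertica_python

# Placeholder connection details for a Vertica or Vertica OnDemand instance.
conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "dbadmin",
    "password": "secret",
    "database": "analytics",
}

# A flex table accepts JSON without a predeclared schema, so a feed can be loaded
# first and shaped later; keys in the JSON become queryable virtual columns.
setup = [
    "CREATE FLEX TABLE social_feed()",
    # Path is on the Vertica node; each line of events.json is a record such as
    # {"user_id": "acme", "text": "...", "score": 0.4}
    "COPY social_feed FROM '/data/events.json' PARSER fjsonparser()",
]

query = """
    SELECT user_id, COUNT(*) AS mentions, AVG(score::FLOAT) AS avg_score
    FROM social_feed
    GROUP BY user_id
    ORDER BY mentions DESC
    LIMIT 10
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    for stmt in setup:
        cur.execute(stmt)
    cur.execute(query)
    for row in cur.fetchall():
        print(row)
```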

Gardner: I'm seeing a great deal of flexibility on the applicability of this. We've seen from RingDNA how this can help an inside sales organization do things they just could never have done before.

We have seen from GateWest how this is essential to bringing knowledge management and document management to a whole new level by combining the best of cloud and mobile devices.

Then, as you're now saying, we're only scratching the surface about how IT organizations can use the cloud and the analytics as a service for improving their application lifecycle management, their business service management, or their application development test. So it's really an exciting time.

I'm afraid we are about out of time for today’s discussion, but there's a lot more that people can learn at hp.com in terms of Haven OnDemand. Let’s just end with one more peek into the future. Fernando, what might we expect next? Where do you think Haven OnDemand will go in the near future in terms of a new type of business value?

Disrupting markets

Lucini: Let me just say that we're going to disrupt a bunch of markets. We're going to be looking to take over some markets out there that have been very traditionally on premise and we're going to try to democratize it. You can guess that we're going to take the world of video and voice and we are going to make that very democratic.

There are going to be lots of interesting things coming out where we're going to allow our customers to create their own APIs and extend the platform themselves. So there is a lot of that to look forward to.

We'll also be extending our Vertica OnDemand presence, getting more and more customers in there and adding more modes, using more of our Vertica technology to add functionality in a REST, web-services kind of way to the on-demand picture, and adding more and more APIs to reflect the richness of the platform. It's clear to everyone that this is only the beginning of an amazing story. There are quite a lot of APIs already, but there are many, many more to come. So there is quite a lot to look forward to.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Tags:  BriefingsDirect  Dana Gardner  Fernando Lucini  Howard Brown  HP  HP Haven  HP OnDemand  HPDiscover  IDOL  Interarbor Solutions  Neal Holley  Vertica Marketplace 


Hortonworks accelerates the big data mashup between Hadoop and HP Haven

Posted By Dana L Gardner, Monday, December 01, 2014

This latest BriefingsDirect deep-dive big data thought leadership interview examines how Hortonworks is working with HP on improved management of very large -- and very active -- datasets.

We'll explore how HP and Hortonworks are integrating Hadoop into more of the HP Haven family to make it easier for developers and data scientists to access business intelligence (BI) and analytics as a service.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn how, BriefingsDirect sat down with Mitch Ferguson, Vice President of Business Development at Hortonworks, at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We heard the news earlier this year about HP taking a $50-million stake in Hortonworks, and then about Hortonworks' IPO plans. Please fill us in little bit about why Hortonworks and HP are coming together.

Ferguson: There are two core parts to that answer. One is that the majority of Hadoop came out of Yahoo. Hortonworks was formed by the major Hadoop engineers at Yahoo moving to Hortonworks. This was all in complete cooperation with Yahoo to help evolve the technology faster. We believe the ecosystem around Hadoop is critical to the success of Hadoop and critical to the success of how enterprises will take advantage of big data.

If you look at HP, a major provider of technology to enterprises, not only at the compute and storage level but also at the data management, analytics, and systems management levels, the complementary nature of Hadoop as part of the modern data architecture, combined with the HP hardware and software assets, provides a very strong foundation for enterprises to create the next-generation modern data architecture.

Gardner: I'm hearing a lot about the challenges of getting big data into a single set or managing the large datasets.

Users are also trying to figure out how to migrate from SQL or other data stores into Hadoop and into HP Vertica. It’s a challenge for them to understand a roadmap. How do you see these datasets as they grow larger, and we know they will, in terms of movement and integration? How is that path likely to unfold?

Machine data

Ferguson: Look at the enterprises that have been adopting Hadoop. Very early adopters like eBay, LinkedIn, Facebook, and Twitter are generating significant amounts of machine data. Then we started seeing large enterprises, aggressive users of technology, adopt it.

One of the core things is that the majority of data being created every day in an enterprise is not coming from traditional enterprise resource planning (ERP), customer relationship management (CRM), or financial management systems. It's coming from websites -- clickstream data, log data, or sensor data. The reason there is so much interest in Hadoop is that it allows companies to cost-effectively capture very large amounts of data.

Then, you begin to understand patterns across semi-structured, structured, and unstructured data to begin to glean value from that data. Then, they leverage that data in other technologies like Vertica, analytics technologies, or even applications or move the data back into the enterprise data warehouse.

As a major player in this Hadoop market, one of the core tenets of the company was that the ecosystem is critical to the success of Hadoop. So, from day one, we’ve worked very closely with vendors like Microsoft, HP, and others to optimize how their technologies work with Hadoop.

SQL has been around for a long time. Many people and enterprises understand SQL. That's a critical access mechanism to get data out of Hadoop. We’ve worked with both HP and Microsoft. Who knows SQL better than anyone? Microsoft. We're trying to optimize how SQL access to Hadoop can be leveraged by existing tools that enterprises know about, analytics tools, data management tools, whatever.

That's just one way that we're looking at leveraging existing integration points or access mechanisms that enterprises are used to, to help them more quickly adopt Hadoop.
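
As a hedged illustration of that SQL-on-Hadoop access path, the sketch below assumes a HiveServer2 endpoint and the open-source PyHive client; the hostname, table layout, and HDFS location are placeholders.

```python
from pyhive import hive

# Placeholder HiveServer2 endpoint; in practice this is the SQL surface that
# existing BI and data-management tools can point at.
conn = hive.Connection(host="hadoop-edge.example.com", port=10000, username="analyst")
cur = conn.cursor()

# Expose raw clickstream files already sitting in HDFS as a SQL table.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS clickstream (
        ts STRING,
        user_id STRING,
        url STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
    LOCATION '/data/raw/clickstream'
""")

# Anyone (or any tool) that knows SQL can now query data stored in Hadoop.
cur.execute("""
    SELECT url, COUNT(*) AS hits
    FROM clickstream
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10
""")
for url, hits in cur.fetchall():
    print(url, hits)
```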

Gardner: But isn’t it clear that what happens in many cases is that they run out of gas with a certain type of database and that they seek alternatives? Is that not what's driving the market for Hadoop?

Ferguson: It's not that they're running out of gas with an enterprise data warehouse (EDW) or relational database. As I said earlier, it's the sheer amount of data. By far, the majority of data is not coming from those traditional ERP,  CRM, or transactional systems. As a result, the technology like Hadoop is optimized to allow an enterprise to capture very, very large amounts of that data.

Some of that data may be relevant today. Some of that data may be relevant three months or six months from now, but if I don't start capturing it, I won't know. That's why companies are looking at leveraging Hadoop.

Many of the earlier adopters are looking at leveraging Hadoop to drive a competitive advantage, whether they're providing a high level of customer service, doing things more cost-effectively than their competitors, or selling more to their existing customers.

The reason they're able to do that is because they're now being able to leverage more data that their businesses are creating on a daily basis, understanding that data, and then using it for their business value.

More than size

Gardner: So this is an alternative for an entirely new class of data problem for them in many cases, but there's more than just the size. We also heard that there's interest in moving from a batch approach to a streaming approach, something that HP Vertica is very popular around.

What's the path that you see for Hortonworks and for Hadoop in terms of allowing it to be used in more than a batch sense, perhaps more toward this streaming and real-time analytics approach?

Ferguson: That movement is under way. Hadoop 1.0 was very batch-oriented. We're now in 2.0, and it's not only batch, but interactive and also real-time. There's a common layer within Hadoop called YARN, and Hortonworks has been very influential in evolving this technology. Think of YARN as a data operating system that is part of Hadoop, sitting on top of the file system.

YARN provides the access mechanisms for applications and integration points, whether they're batch-oriented applications, interactive integrations, or real-time ones like streaming or Spark. Those payloads or applications, when they leverage Hadoop, will go through these various batch, interactive, or real-time integration points.

They don't need to worry about where the data resides within Hadoop. They'll get the data via their batch, interactive, or real-time access point, based on what they need. YARN will take care of moving that data in and out of those applications. Streaming is just one way of moving data into Hadoop. That's very common for sensor data. It's also a way to move it out. SQL is a way, among others, to move data.
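
As a rough, non-Hortonworks-specific illustration, the following minimal PySpark job reads data already sitting in HDFS; submitted with spark-submit --master yarn, YARN schedules the containers and the application never has to know where the blocks live. The path and record format are assumptions.

```python
from pyspark import SparkConf, SparkContext

# Submitted with: spark-submit --master yarn sensor_counts.py
# YARN allocates the containers; the job only names the dataset it needs.
conf = SparkConf().setAppName("sensor-counts")
sc = SparkContext(conf=conf)

# Sensor readings landed in HDFS as lines like: "2014-12-01T10:00:00,device-42,73.5"
readings = sc.textFile("hdfs:///data/raw/sensors")

# Count readings per device -- a batch pass over the data wherever HDFS has placed it.
per_device = (
    readings.map(lambda line: (line.split(",")[1], 1))
            .reduceByKey(lambda a, b: a + b)
)

for device, count in per_device.take(20):
    print(device, count)

sc.stop()
```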

Gardner: So this is giving us choice about how to manage larger scales of data. We're seeing choice about the way in which we access that data. There's also choice around the type of the underlying infrastructure to reduce costs and increase performance. I am thinking about in-memory or columnar.

What is there about the Hadoop community and Hortonworks, in particular, that allows you to throw the right horsepower at the problem?

Ferguson: It was very important, from Hortonworks' perspective, from day one to evolve the Hadoop technology as fast as possible. We decided to do everything in open source to move the technology very quickly and leverage the community effect of open source, meaning lots of different individuals helping to evolve this technology fast.

The ability for the ecosystem to easily and optimally integrate with Hadoop is important. So there are very common integration points. For example, for systems management, there is the Ambari Hadoop services integration point.

Whether it's HP OpenView or System Center in the Microsoft world, that allows them to manage and monitor Hadoop along with the other IT assets that those management technologies integrate with.

Access points

Then there's SQL access via Hive, an access point that allows any technology that integrates with or understands SQL to access Hadoop.

Storm and Spark are other access points. So, common open integration points, well understood by the ecosystem, are really designed to help optimize how various technologies -- at the virtualization layer, the operating system layer, data movement, data management, and the access layer -- can optimally leverage Hadoop.

Gardner: One of the things that I hear a lot from folks who don't understand yet how things will unfold, is where data and analytics applications align with the creation of other applications or services, perhaps in a cloud setting like a platform as a service (PaaS).

It seems to me that, at some point, more and more application development will be done through PaaS with an associated or integrated cloud. We're also seeing a parallel trajectory here with the data, along the same lines of moving from traditional systems of record into relational, and now into big data and analytics in a cloud setting. It makes a lot of sense.


I've talked to a lot of people about that. So the question, Mitch, is how do we see a commingling, and even an intersection, between PaaS for general application development and PaaS for BI services -- BI as a service?

Ferguson: I'll answer that question in two ways. One is about the companies that are using Hadoop today, and using it very aggressively. Their goal is to provide Hadoop as a service, irrespective of whether it's on premises or in the cloud.

Then we'll talk about what we see with HP, for example, with their whole cloud strategy, and how that will evolve into a very interesting hybrid opportunity and maybe pure cloud play.

When you think about PaaS in the cloud, the majority of enterprise data today is on premises, so there's a physics issue in trying to run all of my big data in the cloud. As a result, a number of people are adopting the data-lake concept: they're provisioning large Hadoop clusters on premises and moving large amounts of data into this data lake.

That provides data as a service to the business units that need data in Hadoop -- structured, semi-structured, or unstructured -- for new applications, for existing analytics processes, and for new analytics processes. Effectively, they're providing data as a service, capturing it all in a data lake that continues to evolve.

Then think about how companies may want to leverage a PaaS. It's the same thing on premises. If my data is on premises, because that's where the physics requires it to be, I can leverage various development tools or application frameworks on top of that data to create new business apps. About 60 percent of our initial sales at Hortonworks are for new business applications at an enterprise, with both business and IT involved.

Leveraging datasets

Within the first five months, 20 percent of those customers begin to migrate to the data-lake concept, where now they are capturing more data and allowing other business entities within the company to leverage these datasets for additional applications or additional analytics processes. We're seeing Hadoop as a service on premises already. When we move to the cloud, we'll begin to see more of a hybrid model.

We're already starting to see this with one of Hortonworks' large partners, where archive data moves from on premises into low-cost cloud storage. I think HP will have that same opportunity with Hadoop and their cloud strategy.

Already, through an initiative at HP, they're providing Hadoop as a service in the cloud for those entities that would like to run Hadoop in a managed service environment.


That's the first step of HP beginning to provide Hadoop as a managed service off premises. I believe you'll see that migrate to on-prem/off-prem integration -- a hybrid opportunity -- in some companies as their data moves off premises. Some will simply want to run all of their big-data services, with Hadoop as a service, completely in the HP cloud, for example.

Gardner: So we're entering an era now where we're going to be rationalizing how we take our applications as workloads, and continue to use them either on premises, in the cloud, or hybrid. At the same time, over on the side, we're thinking along the same lines architecturally with our data, but they're interdependent.

You can’t necessarily do a lot with the data without applications, and the applications aren’t as valuable without access to the analytics and the data. So how do these start to come together? Do you have a vision on that yet? Does HP have a vision? How do you see it?

Ferguson: The Hadoop market is very young. The vision today is that companies are implementing Hadoop to capture data that they were previously just letting fall on the floor. Now they're capturing it. The majority of that data is on premises, and they're beginning to use it in new business applications or existing analytics processes.


As they begin to capture that data, as they begin to develop new applications, and as vendors like HP, working in combination with Hortonworks, provide the ability to move data effectively from on premises to off premises -- and to govern where that data resides in a secure and organized fashion -- you'll begin to see much tighter integration of new business and big-data applications being developed on prem, off prem, or across the two. It won't matter.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  big data  BriefingsDirect  Business Intelligence  Dana Gardner  Hadoop  HAVEn  Hortonworks  HP  HP DISCOVER  HP Vertica  Interarbor Solutions  Mitch Ferguson 


HP simplifies Foundation Care Services to deliver just-in-time, pan-IT tech support

Posted By Dana L Gardner, Monday, November 24, 2014

Much of the attention to coping with mega IT challenges such as cloud, bring your own device (BYOD), mobile applications, and big data focuses on adoption and implementation strategy. Yet the added complexity and the requirements of supporting these technologies once they're in place have now also become top of mind.

So how do enterprises deliver improved user experiences, leverage new reactive support tools and diagnostics, and increasingly rely on self-help and automation to keep their far-flung systems and services fully functional?

 Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect recently sat down with an HP Technology Services Executive to chart a better path to simplified, just-in-time, and pan-IT support improvements -- despite dynamic and complex IT environments. Lou Berger, Vice President of Technology Services Enablement and Readiness in the HP Enterprise Group, took some questions from me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the key trends and drivers that are impacting the reactive IT support services market?

Berger: Data center managers and CIOs are entrusted with managing the current legacy environment they have while transitioning to address all the new trends: cloud, mobile, big data, and BYOD. They're all asking the data center to change and transform to address these things.

Berger

These new workloads are different, requiring new infrastructure and new strategies, and they're unpredictable. They're changing quickly, increasingly in hybrid cloud, with increased complexity. Hosting versus legacy infrastructure is shaping the decisions and the requirements CIOs have to manage in their environments. This obviously gives them more choices, but it also adds more complexity.

The current state of affairs for a CIO is a very complex mix of technologies, supporting the old, while developing the new. They have new solutions that they're building on their own. They're buying solutions, converging their infrastructure, and really being asked to make choices now that are going to lead them into the future.

Gardner: And is this the case, Lou, for both enterprises as well as SMBs? Is there any difference between those two markets when it comes to IT support?

Impactful decisions

Berger: Not at all. The decisions are the same. The SMBs are looking to the future to save and optimize their environment and making the same exact decisions that the enterprises are making. Perhaps they're moving at different speeds, some more agile and innovative than others, but everybody is being forced to make the same decision.

Gardner: It seems that expectations have changed. End-users, from their consumer devices or at-home systems, are used to getting rapid support and help. Have the expectations of the end-user shifted?

Berger: In the new world, always-on is really the keyword, and data center managers' service-level agreements (SLAs) with their customers, the end-user, are at a much higher level. Access is always expected to be there for the full community of users, from the developers, to the actual customers, and then the end-users on the outside. The world today is 24x7 and with all the changes happening at the same time, they need to support that environment.

Gardner: So we're all adapting, we're all changing, and HP has adapted and changed as well. Maybe you could fill us in a little bit at a high level of what has changed with Foundation Care Services, and then also how reactive support fits into a wider panoply of all support choices?


Berger: At HP, we took a hard look at our service portfolio, not only Foundation Care, but across the whole portfolio, and we looked at how we were addressing customers’ needs in the current environment, and how we needed to look forward as the world was changing to meet the needs we just discussed. We totally revamped our portfolio about two years ago to really enable this new style of IT.

The first thing we did was simplify. CIOs have to make very difficult choices based on meeting the SLAs of the customers, of the environment, of each solution and each component of that solution, and then balance that against the cost of those things. We took a look at and simplified our portfolio.

To make the choices easier for CIOs, we broke the portfolio into three basic items. The first is Foundation Care, the base of all service -- the reactive part -- and the first decision the CIO has to make.

Second was adding Proactive Care, the ability for CIOs to add and make the decision of how much proactive support they wanted to add for the specific environment and the solutions they were building.

Finally, built on that, Datacenter Care, which combines all the options that we can make available to a customer to tailor to their specific needs for either their solutions or environment.

Simplifying support

When we talk about Foundation Care, we looked at our portfolio and realized it was extremely complex for something that seems as simple as reactive support. We had over 18 offerings available to customers, which added confusion to the decision-making process. Finally, we looked at our SLAs with customers and at how they combined those offerings to manage complex environments.

In our new Foundation Care portfolio, we've narrowed it down to five offerings only, with three response choices for the customer to decide. This way, a customer can make very easy choices to understand the more fundamental decisions they need to make on reactive support -- what response time they want, what coverage window they want, and the length of term they want for that service before they review for renewal. It's a very simple decision-making practice.

We then took a look at Foundation Care and at what customers required to manage those environments, and made those things available through our call centers and our portals. Customers can now understand very easily, at a component level or across their environment, what's available through the services they've already purchased and the SLA we have with them, so they can manage their environments.

There's also the ability to use mobility tools to assess and understand exactly the state of their environments, or of the devices they have connected to us and our support -- something we highly recommend because of the value it brings.


So we removed complexity and we provided management and operational tools for customers to use on this foundational service.

Gardner: This sounds like it aligns very well to some of these trends we mentioned -- consumer behavior and expectations. Many people like the idea of self-help, of getting the right information that they can act on. Of course, they like to get it on a mobile device, which gives them flexibility and that 24x7 ability to track and manage.

Lou, one of the things that seems different nowadays is the ability for automation to play a larger role. How are your customers and HP adjusting to trying to automate some of these things, maybe through alerts and notifications, maybe through remote access in understanding systems regardless of where they are? What's the newest on that level, that automation capability?

Berger: As you know, HP has always invested heavily in connected devices -- the ability for us to securely connect to each device in a customer's environment and monitor it. For the devices we monitor this way, our time-to-repair is significantly faster than with a straight call-in for an unconnected device.

That connectivity allows us to do much more than that. It allows us to communicate information that we're capturing for the customer to actually see, using mobility devices, on the health and the state of devices themselves and, in many cases, the configuration.

It allows us to understand failures, repair them quickly on behalf of customers, and notify customers of an issue so that we can work with them to repair.

Connected experience

These tools allow us to understand the state of their environment. As we move up the proactive stack, we can help them do preemptive maintenance: making recommendations based on the devices -- upgrades to firmware and software, compatibility across the environment and between its key parts and the whole solution -- and helping keep their devices healthy.

The connected world is a key part of our strategy in helping customers manage through the complexity of new environments. Of course, the information we track becomes available to customers to help them manage their environments independently, both their SLAs, their contractual information, and the broader environment.

Gardner: And not only are we dealing with rapid change and complexity, but heterogeneity remains with us, as it has all along. When we talk about doing this support with updates, patches, and firmware, we are not just talking about one company or one vendor. We're talking about whatever your environment has and whatever you need. Is that not correct?

Berger: That’s very correct. When HP develops a compatibility matrix, these are the things we apply in helping customers be preemptive and make the decisions on the best way of managing their environment and staying up-to-date in the healthiest way.


Gardner: So you would be able to cover the entire fabric of your environment, not just parts and pieces, and that’s essential? You can’t have those cracks where things fall between or where patches don’t get made. That’s where these real problems can arise.

I have to also imagine, Lou, that this has the interest of the security and the governance, risk and compliance (GRC) people. This is another way for them to get assurance that things will continue not only performing, but performing securely. How does GRC and security fit into the services portfolio?

Berger: First -- and it's most relevant when we talk about security -- our connectivity is highly secure. It's been tested, agreed, and approved across every tier of business and every type of business, from the financial industries, to the government agencies. These have set the bar very high for security. So you can rest assured that our connectivity is a very secure and comfortable connection.

The compliance of environments in this new world is imperative. As CIOs decide, at the product, solution, or environment level, how they will meet their SLAs, they also decide how many proactive elements they want to add to that support. Providing these types of reports, or an enhanced call experience, across the environment rather than at the piece level adds to our ability -- and the customer's ability -- to manage the environment to those compliance levels.

Again, the goal of the new portfolio was to simplify and make clear what each level of the portfolio gave in deliverables, and how that translates to value and the ability for the CIO to make these decisions and then meet their compliance requirements.

We stage our portfolio in a way that allows CIOs to make the right decisions to meet their compliance and security needs at the optimum cost for them.

Information is key

Gardner: So a key to good support, of course, is getting the right information to the right people in the right time-frame. We've talked a bit about the timing being very rapid, and the means to get that information being somewhat automated, with more mobility. But the information is still key.

So how do we improve the information flow? I understand the HP Support Center has been revamped to a certain degree as well. Is that part of the equation -- the information -- also rich, up-to-date, and easily available?

Berger: At the Foundation Care level of support, a customer has the option of simply calling the call center about a problem to be fixed and getting the full support experience that comes from there. They also have access to our product pages, where they get specific information and can download drivers, software, and firmware on their own, which often include both fixes and new features and functionality.

They have the ability to search the HP Support Center, which holds all the content repositories for answers to support questions; guided troubleshooting, which gives our customers a step-by-step way to self-heal; and the Support Community and our HP Forums, which let customers interact with peers and learn how others dealt with issues and best practices.


You have the Support Case Manager, where a customer can call in at any time and understand exactly the state of open cases. So if a case is in the process of being fixed, they can call in, or they can use the mobile app, which provides automated updates.

In addition, we have 24x7 chat with our HP Support Specialists, available either from the mobile app or from a PC, as well as the full suite of solution and technical manuals available to a customer for support.

Gardner: Of course, HP being a global company that means that these services are available around the world, with localization issues managed. What’s the breadth and depth in terms of that applicability to different markets and different languages?

Berger: HP's greatest strength for a global customer is its 24x7 worldwide support. We have Support Centers in every region. We have local-language support for all our customers in every country necessary. We have the full suite of access and the same customer experience in any place in the world. That is the strength of HP.

Gardner: Okay, we've talked a lot about what it does. I think it's always great to show in addition to tell. Do we have any examples where we can point to an organization, large or small, one market or another, and demonstrate how they're using the simplified Foundation Care Services, getting some benefits, making sure that all the systems are up and running, and if not, the fix is in right away?

Foundation Care services

Berger: Foundation Care, our reactive service, is the base of all services, so I'll stick to that as an example. The first is a UK-based IT services company, a holding company for a group of companies that provide real-time monitoring systems and data management services, specifically for the UK's leisure and forecourt petrol sectors.

The customer was looking to upgrade their IT infrastructure to handle growth in customer demand. Our solution on the product side was to deploy converged infrastructure with HP Blades and Virtual Storage.

The customer's requirements were met with HP Foundation Care Support. It's a very stable, converged-infrastructure environment, but there are times when an anomaly can arise.

For example, in a connected world, the customer's storage device sent a message stating that a drive was about to fail. Under Foundation Care, the drive was sent to the customer, preventing an issue before it happened. It's a different experience from many of our competitors, because we monitor the converged infrastructure and take proactive action instead of waiting for the problem to occur.


So we recognized an issue. We proactively notified the customer. We sent the fix or sent a CE to fix their problem. We helped this customer meet their SLA at 99.98 percent uptime. In this case, we gave them a 100 percent uptime.

A quote from the customer, “HP Support was fantastic. We were protected all the way through the support processes.”

Gardner: Any other examples?

Berger: Sure. In this case, we helped a customer shift their focus from maintenance to strategic activities. HP offered a differentiated support experience by providing proactive alerts to flag potential issues.

The customer in this case is an underwriting services company that uses proprietary databases and algorithms to estimate people's life expectancy based on their medical records. The customer had performance issues with large amounts of data spread across different servers, with various hard-drive configurations and several direct-attached storage devices.

The resolution was to modernize their data center, where we worked closely with the customer, consolidating servers and storage using server virtualization and SAN technology. We installed ProLiant Server and 3PAR Storage and the customer purchased Foundation Care 24x7 support services.

The benefits were that centralized storage provided reliability and productivity. The customer's IT staff previously spent about 70 percent of their time dealing with infrastructure; now they spend only 20 percent -- a saving of 50 percentage points.

With Foundation Care support, they now manage availability better with proactive support alerts on potential issues and focusing on improving applications rather than failures.

Managing costs better

Gardner: Lou, I've been tracking enterprise IT for quite some time now, and the question always comes up, "What do you get for your dollar or your peso or your Euro?" I have had trouble always coming up with return on investment (ROI) or a total cost of ownership (TCO) formula for some aspects of IT, for example, investing in modernization of a data center.

It's more the soft quality-assurance issues, but it seems to me, the economics of something like technology services and Foundation Care in particular is pretty straightforward.

What do you tell people when they ask you about the ROI here? It seems that if you catch one big issue in an always-on environment, that can really save you a great deal of money very rapidly.

Berger: In any industry, an outage translates into lost revenue and added cost, beyond the customer-satisfaction issues and everything else. There are studies showing that in some industries a long outage can actually put a company out of business in a very short amount of time.


But this plays very closely into the decision the CIO must make when choosing support: understanding the impact on their environment, and the impact on the business if they don't meet those needs.

In the Foundation Care Services portfolio, we have three response levels for most customers. Call-to-Repair is the highest level of service, for when very fast response time is critical to the business; we commit to a six-hour call-to-repair.

Next is our broadest coverage: 24x7 coverage, where we generally take four hours to fix a customer's problem, with full access to our Support Centers and on-site service as part of the coverage.

Most economical would be Foundation Care Next Business Day, with coverage from 8 a.m. to 5 p.m., Monday to Friday. So a CIO can make decisions, based on the SLA they have and the impact to the business, whether critical or not, and apply these very simple service choices -- rather than the 18 we had before.

Gardner: So even though you've simplified, you still have the benefit of one size doesn't need to fit all. For example, I might have a set of applications or even a small rack or data center that doesn't require that higher level of oversight, and I might want to tier this. That gives me a lot more flexibility, and therefore I can manage my costs better. Is that the case?

Berger: That's exactly the reason we did it. A data center manager or a CIO can make the right decisions at the right cost profile to meet their business needs and optimize their decision making. Then, using the tools we provide, they can see exactly what is covered for each of those devices at any time, and understand whether they're still meeting those needs as times and needs change.

Looking to the future

Gardner: Looking to the future a little bit Lou, as we mentioned at the beginning, we have a lot of change. You mentioned it earlier, but we're looking at a lot more converged-infrastructure capabilities, particularly for big data.

We're looking at more use of hybrid and more types of cloud -- platform as a service (PaaS), software as a service (SaaS), moving workloads from cloud to cloud if we can do that in the future -- the Internet of Things, and the growing scale of data, with streaming data rather than static or batch data.

How do these things come to bear? What is your vision for how technology services adjust, given what we're expecting to happen over the next several years?

Berger: I hope you can see from the way we developed our portfolio that our Foundation Care Services allow the customer to make the most basic decisions on these requirements.


We added Proactive Care service, which allows the customer to add further coverage based on the same parameters, adding preemptive support for those areas, environments, and solutions that require a greater uptime, a greater sense of security, and an enhanced call experience that includes solutions support.

Proactive Care service allows a customer to call in across a variety of products. We own that problem and solve the problem across the solution.

Then, building on that, there is our Datacenter Care service, which includes Foundation Care and Proactive Care and lets the customer add elements tailored to their specific requirements -- many of them now built specifically for the new style of IT.

We also have a Cloud Hybrid Support offering, specific for this new style of IT. And different opportunities for customers to translate CAPEX into OPEX through support offerings, because many of the customers who are building on-premise clouds and converged infrastructure want the same experience from a financial point of view as moving to a hosted service. We built that into our Datacenter Care service.

As we move forward, the new style of IT and DevOps require agility, velocity, innovation, and continuous service. We're tailoring new offerings to that audience and those requirements, partnering ever more closely with customers to meet their specific needs -- and, as always, building on Foundation Care support for their environment.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Dana Gardner  HP  HP Foundation Care Services  HP Support  Interarbor Solutions  Lou Berger 


HP Analytics blazes new trails in examining business trends from myriad data

Posted By Dana L Gardner, Monday, November 17, 2014

The next BriefingsDirect deep-dive big data thought leadership interview examines how HP analyzes its own vast data warehouses to derive new insights for its global operations, extensive supply chain, sales organization, global marketing groups, and customers.

We'll explore how the Analytics Group at HP, based in India, sifts through myriad internal data sources, as well as joins with other public data sets, to deliver entirely new intelligence value that helps make business more responsive and efficient.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how, BriefingsDirect sat down with Pramod Singh, Director of Digital and Big Data Analytics at HP Analytics in Bangalore, India, at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us a little bit about the Analytics Group at HP, what you do, and what’s the charter of your organization.

Singh: We have a big analytics organization in HP called Global Analytics, and it serves the analytics for most of HP. About 80 to 90 percent of the analytics happening inside HP comes out of this ecosystem. We do analytics across the entire food chain at HP, which includes supply chain, marketing, and sales.

What I personally lead is an organization called Digital Analytics, and we are responsible for doing analytics across all digital properties for HP. That includes the eCommerce, social media, search, and campaign analytics. Additionally, we also have a Center of Excellence for Big Data Analytics, where we're using HP’s big-data technologies, which is that framework called HAVEn, to help develop big-data solutions for HP customers, as well as internal HP.


Gardner: Obviously, HP is a very large global company. What sort of datasets are we talking about here? What’s the volume that you're working with?

Data explosion

Singh: As you know, a data explosion is happening. On one end, HP has done a very good job over the last six to seven years of getting most of their enterprise data into something called an enterprise data warehouse. We're talking about close to two petabytes of data, which is structured data.

Singh

The great part of this journey is that we have taken data from 700-800 different data marts into one enterprise data warehouse over the last three to four years. A lot of data that is not part of the enterprise is also becoming an important part of making the business decisions.

A lot of the data I personally deal with in the digital space is what we call human-generated data -- the social media data that no enterprise owns. It's open for anybody to use. What I've started to see is that, on one hand, we've done a really good job of getting data into the enterprise and getting value out of it.

We've also started to analyze and harvest the data that is out in the open space. It could be blogs, Twitter feeds, or Facebook data. Combining that is what’s bringing real business value.

The Global Analytics organization is more than 1,000 people spread through different parts of the world. A big chunk of that is in Bangalore, India, but we have folks in the US and the UK. We have a center in Guadalajara, Mexico and couple of other locations in India. My particular organization is close to 100 people.


I have a PhD in pure mathematics, and before that I earned an MBA in marketing. It's a little bit of an awkward mix. I got into the analytics space in the mid-'90s, working for Walmart.

I built out Walmart's Assortment Planning System in the late '90s and then came to HP in 2000, leading an advanced data-mining center in Austin, Texas. From there I evolved into doing e-business analytics for a few years and then moved to customer knowledge management. I spent five years in IT developing analytics platforms.

About year-and-a-half ago, I got an opportunity to lead the big-data practice for this organization called Global Analytics. In five years, they had gone from five people to more than 1,000 people, and that intrigued me a lot. I was able to take the opportunity and move to India to go lead that team.

More insights

Gardner: Pramod, when we look back into this data, do you gain more insights knowing what you're looking for, or not knowing what you're looking for? What kind of insights were the unexpected consequences of your putting together this type of data infrastructure and then applying big-data analytics to it?

Singh: We deal with that day in and day out. I'll give you a couple of examples. This is something that happened about three or four years ago at HP. We were looking at a classic problem in marketing to US small and medium-sized businesses (SMBs). We had a fixed marketing budget, and across the US there are more than 20 million SMBs. The classic definition of an SMB is any business with 100-500 employees.

HP had an install base covering a small part of that. We realized that this segment of SMBs is squeezed between the classic consumer, where you can do mass marketing such as TV advertising, and the enterprise, where you can actually put bodies on the account -- people who have relationships. SMBs are squeezed between those two extremes.

The question then became what do we do with that? Again, when you do data mining and analytics, you may not know where this will lead you.

On one hand, you can't reach out to every single one of them. It’s just way too expensive to do that. On the other hand, if you try to go do the marketing, you don’t get the best out of it.

We were starting to work on something like that. I was approached by a vice president in marketing who said revenues are declining and they had a limited marketing budget. They didn’t know what to do.

This is where one of those unexpected things came in. I said, "Let’s see in that install base whether there are different segments of customers that are behaving differently." That led us on kind of a journey where we said, well, "How do we start to do that right? Let’s figure out what are the different attributes of data that I can capture."

On one hand, if you look at SMBs, you can capture who they are, what industry segment they're in, how many employees they have, where are they based, who the CEO is. It's what we call firmographics.

On the other hand, you have classes of data involving their interaction with HP. It could be things like how many PCs or servers they bought, how long ago did they buy it, how much money they spent, the whole transactional aspect of it.

Then there are derived attributes. You may be able to derive that in the last year they came to us four times. What interaction did we have on the website? For example, did they come to us through a web channel? If they did, how many email offers were sent to them? How many of those were clicked? How many of those converted? Those are the classes of data we could capture.

The question then became what do we do with that? Again, when you do data mining and analytics, you may not know where this will lead you.

Mathematical modeling

We thought that maybe there were different classes of customers. We pulled our data together and started to do mathematical modeling, using clustering -- analytical techniques such as K-Means -- and things like that. We started to get some results and to analyze them. In this type of situation, you have to be careful, because some things may look mathematically correct but may not have real business value behind them.

Once we started to look at those things, we went through multiple iterations. We realized that we were not getting segments or clusters that were very distinct. One day, I was driving home in Austin, and I said, "You know what? Who they are I don’t control, but as far as what they're doing with HP we have a reasonably good understanding."

So we started to do clustering based only on those attributes, and that’s where an "aha" moment came. We started to find these clusters, which we call segments, where we eventually found a cluster which was that 7 to 8 percent of the population that brought in 45 percent of revenue.

The marketers started to say that this was a gold mine. That’s what we never expected to happen. We put together a structure. Once we figured out these four or five clusters, we tried to figure out why they were clustered together. What’s common?


We built out a primary research thing, where we took a random sample out of each one of those clusters, interviewed those guys, and were able to build a very good profile of what these segments were.

There are 20 million SMBs in the US, and we were able to build a model to predict which of these prospects were similar to the clusters we had. That's where we found customers who looked like our most profitable customers, which we ended up calling Vanguards. That resulted in a tremendous dollar increment for HP. It's a good example of what you talked about -- finding unexpected things.

We just wanted to analyze data. It led us on a journey, and we ended up finding a customer group we weren't even aware of. Then we could build a marketing strategy to target them and get value out of it.
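
For readers who want to see the shape of this approach, here is a simplified, hypothetical sketch of the two steps Singh describes: cluster the install base on behavioral attributes with K-Means, then train a look-alike model to score outside prospects against the best-performing segment. The column names, files, and model choices are illustrative assumptions, not HP's actual pipeline.

# Hypothetical two-step sketch: behavioral clustering, then a look-alike model.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Behavioral ("what they do with us") attributes drive the clustering;
# firmographic attributes, which exist for prospects too, drive the look-alike model.
behavior = ["purchases_last_year", "total_spend", "web_visits",
            "emails_clicked", "offers_converted"]
firmo = ["employee_count", "industry_code", "region_code", "years_in_business"]

install_base = pd.read_csv("install_base.csv")            # hypothetical extract

# Step 1: segment the install base with K-Means on scaled behavioral attributes.
scaler = StandardScaler().fit(install_base[behavior])
X_beh = scaler.transform(install_base[behavior])
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
install_base["segment"] = kmeans.fit_predict(X_beh)

# Pick the "Vanguard"-style segment: the one contributing the most revenue.
best_segment = install_base.groupby("segment")["total_spend"].sum().idxmax()

# Step 2: look-alike model on firmographics (the only data available for prospects).
y = (install_base["segment"] == best_segment).astype(int)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(install_base[firmo], y)

# Score the prospect universe and hand marketing a ranked target list.
prospects = pd.read_csv("prospects.csv")                   # hypothetical extract
prospects["vanguard_score"] = clf.predict_proba(prospects[firmo])[:, 1]
print(prospects.sort_values("vanguard_score", ascending=False).head(10))

The design choice worth noting is the one Singh describes: cluster only on what customers do with HP, then use attributes that also exist for prospects to find look-alikes.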

Gardner: At the Big Data Conference, I've spoken to other organizations who are creating an analytics capability and then exposing that to as many of their employees as possible, hoping for this very sort of unexpected positive benefit. Is there a way that you're taking your analytics either through visualization or tools and then allowing a larger population within HP to experiment with it?

Singh: We're trying to democratize analytics as much as we can. One thing we're realizing is that to get the full value, you don't want data to stay in silos. So there are a couple of things you have to do. In terms of building an ecosystem where you have a good set of motivated people and can give them a career path, we created this organization called Global Analytics. You get a critical mass of people who challenge each other, learn from each other, and do a lot of analytics.

But it's also very important that, on the consumption side, you have people who are analysts, understand analytics, and get the best value out of it. So we try to create that ecosystem. We have seen both ends of it.

Good career path

If you assign just one data miner or analytics person to one team, sometimes that person doesn't find an ecosystem to challenge himself or herself. We're trying to do it on both sides of the fence, so that we can provide people with a good career path.

Hiring these folks is not easy. Once you've hired them, retaining them is not easy. You want to make sure to create an ecosystem where it’s challenging enough for these people to work. It also has to be an ecosystem where you continually challenge them and keep training them.

The analytical techniques are evolving. When I started doing it, things were stable for years. Now, the newer class of data is coming in, newer techniques are coming in, and newer classes of business problems are coming in. It’s very important that we keep the ecosystem going. So we try to do it on both sides.

Gardner: Very interesting. HP, of course, has its own line of products for big-data analysis. You're such a large global enterprise that you're doing lots of analysis, as any good business should, but you're also being asked to show how this works. Are there some specific use cases that demonstrate for other enterprises what you've learned yourselves?


Singh: There are several we can talk about. One is in the social-media space, which I touched on briefly. My career evolved around doing analytics on what I call data inside the enterprise, but over the last couple of years we've started to look at data outside the enterprise.

Recently we went and looked at a bank. We were able to harvest data from the Internet, publicly available data like Glassdoor, for example. Glassdoor is a website where employees of a company can put their feedback, talk about the company, and rate things.

We were presenting it to the executives of this particular bank, and we were able to pull all the data together and tell them about overall employee morale. We figured out that the work-life balance for the employees wasn't very good.

The main component that the employees weren't happy about was their leave policy and their vacation policy. We drilled down and figured out that the bankers seemed to be fairly happy, but the IT guys and analysts weren't very happy. Again, this is one example where we didn't ask for a line of data from the customer. This data is publicly available. You and I, or anybody else, can go get it. I can do that same analysis for HP or any other company.
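
In pandas terms, the drill-down Singh describes can be as simple as the generic sketch below, which assumes the public reviews have already been harvested into a flat file with an overall rating, a few sub-ratings, and the reviewer's role. None of the file or column names reflect the actual engagement; they are illustrative assumptions.

# Generic sketch of drilling down on already-harvested public review data.
import pandas as pd

reviews = pd.read_csv("harvested_reviews.csv")   # one row per public review (hypothetical)

# Overall morale: average rating across the whole company.
print("Overall rating:", reviews["overall_rating"].mean().round(2))

# Which dimension drags the score down (e.g., work-life balance, compensation)?
dimensions = ["work_life_balance", "compensation", "career_opportunities", "management"]
print(reviews[dimensions].mean().sort_values())

# Drill down: the weakest dimension broken out by the reviewer's role.
weakest = reviews[dimensions].mean().idxmin()
by_role = reviews.groupby("role")[weakest].agg(["mean", "count"])
print(by_role.sort_values("mean"))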

That's where I believe the classes of analytics we're doing are changing. A lot of the time, your competitive differentiator is the ability to do things with that data. Data is a corporate asset and will remain one, but this class of what we call user-generated data is changing analytics as a whole. The ability to harvest it and, more importantly, get value out of it will be the competitive differentiator.

Gardner: Any other use cases that demonstrate the power of a particular type of platform, let’s say Vertica in HAVEn, where you've got the power of a columnar architecture and you've got the ability to bring in unstructured data from Autonomy? Maybe there are a couple of use cases that demonstrate the unique attributes of HAVEn when it comes to inclusivity and the comprehensive nature of information today?

Game changer

Singh: Let me talk about a couple of things that have happened in the HAVEn ecosystem. One of the main workhorses in HAVEn is our massively parallel database, Vertica. Beyond being a database that can ingest large volumes of data very quickly and deliver strong query performance, the game-changer for me as an analytics practitioner has been the ability to do analytics in-database.

If I look at my career over the last 20-22 years, most of the time what happens in the analytics space is that you have data residing in a database or an enterprise data warehouse. When you want to build a model, you take the data out and use an analytics platform like SAS, R, or SPSS. You do something there, and you either bring the data back into the environment or you run the models and publish them out.

What Vertica has done that's unique is give us a framework -- the user-defined function (UDF) framework -- through which we can build a data-mining model, run it directly on the database engine, and take the output out.

An example we took to HP Discover a couple of months ago was trying to predict a failure of a machine before the actual failure happens. HP has these big machines and big printers, which are very expensive.

Like a lot of high-end devices these days, they send out a lot of data. They send out data about when you're using a machine. The sensors send out a lot of information -- the pressure of the valves, the temperature they're operating in, the throughput they're giving you, or the number of pages you've printed.


They also give you data on the events when the machine was not performing optimally or actually failed. We were able to ingest all that data, put it into the Vertica platform, and build predictive models using the open-source R language. We built a model that can predict the failure of a machine.

Looking at each component's failure signals, we could predict with a certain probability when the machine would fail, so our service reps can be proactive and not wait for the machine to fail. That's one example of doing in-database data mining using Vertica.
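
From the application side, in-database scoring of that sensor data might look like the hedged sketch below: the model and the data stay inside Vertica, and the client pulls back only scores. The connection details, table, and the scoring function name (predict_failure) are assumptions; the function itself would be installed separately through Vertica's user-defined extension support.

# Hedged sketch: calling an assumed in-database scoring UDF from a Python client.
import vertica_python

conn_info = {
    "host": "vertica.example.com",   # hypothetical host
    "port": 5433,
    "user": "analyst",
    "password": "secret",
    "database": "telemetry",
}

query = """
    SELECT device_id,
           predict_failure(valve_pressure, temperature, pages_printed) AS failure_prob
    FROM   printer_sensor_readings
    WHERE  reading_time > NOW() - INTERVAL '1 day'
    ORDER  BY failure_prob DESC
    LIMIT  20
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(query)
    for device_id, prob in cur.fetchall():
        # Flag machines for proactive service before they actually fail.
        if prob > 0.8:
            print(f"Dispatch service for {device_id}: failure probability {prob:.2f}")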

Another example used more components around the social-media space. One of the problems in the social-media space, and I think you guys are probably familiar with this, is finding influencers.

I gave a talk yesterday about how you do that. There are classical, one-dimensional ways, such as going by the number of followers or retweets someone has. By that measure, Barack Obama or Lady Gaga would be big influencers, but Barack Obama may not be a very big influencer on cloud computing for HP.

So you build those classes of algorithms. My team has actually built out three patented algorithms to figure out how to identify influencers in the space. We've actually built out a framework where we can source that data from the social-media space, drop it into a Hadoop kind of an environment.


We use Autonomy to enrich the data and add sentiment to it, and then drop it into the Vertica environment. In Vertica, you run the algorithms and get an output. Then you can score and predict who the influencers are for the topic you're looking at.

Influencers

I gave the example of Barack Obama: in general a big influencer, but not an influencer for all topics. In politics or the US government he's a big influencer, but not for cloud computing. Influence is also a function of time. Somebody like Diego Maradona was probably a big influencer in soccer in the '90s, but in 2014, not so much.

You have to make sure you can incorporate those factors into the logic of your algorithm. We've been able to use multiple components of HAVEn to build out a complete framework where we can tell numerically who the main influencers are and how influential they are. For example, if you get a score of 93 and I get a score of 22, you are almost four times as influential as I am.
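
As a purely generic illustration -- not the patented algorithms Singh's team built -- a topic-specific, time-decayed influence score along these lines captures the three points he raises: topical relevance, engagement beyond raw follower counts, and decay over time. All field names, weights, and the half-life are assumptions.

# Generic, illustrative influence scoring; inputs and weights are assumptions.
import math
from datetime import datetime, timezone

def influence_score(author, topic_keywords, now=None, half_life_days=90.0):
    """Score one author for one topic from a list of their posts.
    Each post is assumed to have a timezone-aware 'created_at' datetime."""
    now = now or datetime.now(timezone.utc)
    raw = 0.0
    for post in author["posts"]:
        text = post["text"].lower()
        # Topical relevance: share of topic keywords the post mentions.
        relevance = sum(k in text for k in topic_keywords) / len(topic_keywords)
        if relevance == 0:
            continue  # a celebrity's posts rarely mention cloud computing
        # Engagement: shares and replies, dampened with a log scale.
        engagement = math.log1p(post.get("shares", 0) + post.get("replies", 0))
        # Time decay: influence fades (the Maradona-in-the-'90s effect).
        age_days = (now - post["created_at"]).days
        decay = 0.5 ** (age_days / half_life_days)
        raw += relevance * engagement * decay
    # Audience reach, also log-dampened so follower count alone cannot dominate.
    raw *= math.log1p(author.get("followers", 0))
    return raw

def normalize(scores, top=100.0):
    """Rescale raw scores so the strongest influencer lands near `top` (e.g., 93 vs 22)."""
    peak = max(scores.values()) or 1.0
    return {name: round(top * s / peak, 1) for name, s in scores.items()}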

Gardner: For other organizations that are interested in learning more about how HP Analytics is operating and maybe learning from your example, are there any resources or websites we can go to, where you are providing more information about HP Analytics?


Singh: Definitely. There are multiple ways to approach us. We have our own website, and you can talk to the Vertica sales team, who can connect you to us. As I said, we do analytics for all of HP and for select customers. We don't have a direct sales arm; we work through our partners in Enterprise Services, as well as with the software team.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  big data  BriefingsDirect  Dana Gardner  data analytics  HAVEn  HP  HP Vertica  HPDiscover  Interarbor Solutions  Pramod Singh 


Vichara Technologies grows the market for advanced analytics after cutting its big data teeth on Wall Street

Posted By Dana L Gardner, Tuesday, November 11, 2014

The next BriefingsDirect deep-dive big data benefits case study interview explores how Vichara Technologies in Hoboken, New Jersey is expanding its capabilities in big data from origins on Wall Street into other areas, and thereby demonstrating the growing marketplace for advanced big-data analytics services.

The use of HP Vertica as a core big-data component has allowed Vichara to extend its easier-to-use financial modeling tools and apply them to other industries, such as insurance and healthcare.

 Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how advanced big data, cloud, and converged infrastructure implementations are expanding the impact and value of rapid and increasingly predictive analytics, BriefingsDirect sat down with Tim Meyer, Managing Director at Vichara Technologies at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us how your organization evolved, and how big data has become such a large part of the marketplace for gaining insights into businesses.

Meyer: The company has its roots in analytics and risk modeling for all sorts of instruments used on Wall Street to predict prices and valuations. As the IT infrastructure grew from Excel to databases, and eventually to very fast databases such as Vertica, we realized there were many problems that couldn't be solved before, or that took far too long to answer.

Meyer

Wall Street people measure time in seconds, not in hours. We've found that there's a great value in answering a lot of business intelligence (BI) questions -- especially around valuations and risk models, as well as portfolio management. These are very large portfolios and datasets that have to be analyzed. We think that this is a great use of big-data analytics.

Gardner: How long have you been using Vertica? How did it become a part of your portfolio of services?


Meyer: We've been using Vertica for at least two years now. It was one of the early ones, and we recognized it as one of the very fastest databases. We try to use as many of these components as possible. We really like Vertica for its capabilities.

Risk assessment

Gardner: Tim, this whole notion of risk assessment is of interest to me. I think it's coming to bear on more industries. People are also interested in extending from knowing what has happened to being able to predict, and then better prescribe new efforts and new insights.

Tell me about predictive risk assessment. How do you go about that, and what should other companies understand about that?

Meyer: Risk assessment comes about from starting to look at how prices fluctuate and how interest rates move, and thus create changes in derivatives. What has happened most recently is that a lot of the banks and hedge funds have recognized this. Not only is [predictive risk assessment] a business imperative for them to have that half-percent hedge, but there are also compliance reasons for which they need to predict what their business is going to look like.

There are now more and more demands on stress testing, as well as demands from international banking regulations, such as Basel III, that require that businesses such as hedge funds and banks not just look behind, but ahead at how their business is going to look in a year. So this becomes really very important for a host of reasons even more than just how your business is doing.
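
As a generic illustration of the forward-looking portfolio view Meyer describes -- not Vichara's models -- the sketch below simulates many one-year joint outcomes for a small set of risk factors and reads off a value-at-risk and expected shortfall. The positions, volatilities, and correlations are made-up inputs.

# Generic Monte Carlo sketch of a one-year-ahead portfolio risk view.
import numpy as np

rng = np.random.default_rng(7)

positions = np.array([5_000_000.0, 3_000_000.0, 2_000_000.0])  # USD exposures (made up)
annual_vol = np.array([0.18, 0.25, 0.40])                      # per risk factor (made up)
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])
cov = np.outer(annual_vol, annual_vol) * corr

# Simulate 100,000 one-year joint returns for the three risk factors.
returns = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=100_000)
pnl = returns @ positions                      # portfolio profit and loss per path

var_99 = -np.percentile(pnl, 1)                # one-year 99% value at risk
expected_shortfall = -pnl[pnl <= -var_99].mean()
print(f"1-year 99% VaR: ${var_99:,.0f}")
print(f"Expected shortfall beyond VaR: ${expected_shortfall:,.0f}")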

Gardner: If I were a business and wanted to start taking advantage of what's now available through big-data analytics -- and at a more compelling price and higher performance than in the past -- what are some of the first steps?


Do I need to think about the type of data or the type of risk? How do you go about recognizing that you can now get the technology to do this at the analytics level, while there's still the needed understanding of how to do it at the process and methodological level?

Meyer: We work very closely with our customers and try to separate algorithmic work from the development work. A lot of our customers have more than a few Caltech and MIT PhDs who do the algorithmic definitions. But all of them still need the engine, the machine with its scripting, and fast capability to build those queries right into the system as quickly as possible.

We usually work with these kinds of people, and it's a bit of a team effort. We find that's the way to figure out what our value is and what our customer's value is. Together, it has turned out to be very good teamwork.

Gardner: And you are a consultancy, as well as a services provider? Do you extend into any hosting or do you have a cloud approach? How do you manage the technology for the consulting and services you offer?

Broader questions

Meyer: We expand from the core products and tools into broader questions for people who want a proof of concept (POC) of this new technology. We build those on an ongoing basis. People also want to look at options such as the differing performance of clouds -- and they do vary.

So we take on those kinds of consulting work as well, not to mention that it sometimes expands into back-office compliance and sometimes into billing issues. They all relate to the core business of managing portfolios, and they're all linked.

Very often, we've done those kinds of projects and we see even more of these possibilities as we see compliance as a bigger issue, such as Dodd-Frank as well as Basel III, in the financial world. But they are really no different than many regulations coming on the healthcare side for paperwork management, for example.

Gardner: So that raises the question of the verticals that you expect first. Where is predictive risk assessment and the analytics requirements for that likely to appear first?


Meyer: One thing we have learned from our experience in financial modeling and tools is that there is always a need for people who are totally unskilled in SQL or other query languages to quickly get answers. Although many people have different takes on this, we think we've found some tools that are unique. And we think that these tools will apply to other industries, most particularly to healthcare.

These are big problems, but the way we think of it is to start small -- with a POC, or by defining a very small problem and solving it -- rather than trying to take a bite of the entire elephant, so to speak. We find that to be a much better approach to going into new segments, and we'll be looking at both insurance and healthcare as two examples.


Gardner: Back to the technology front. Are there any developments in the technology arena that give you more confidence that you can take on any number of data types, information types, and scale and velocity types?

I'm thinking of looking at either cloud or converged infrastructure support of in-memory or columnar architectures. Is there a sense of confidence that no matter what you go to bite off in the market, you have the technology, and the technology partner, to back you up?

Meyer: We're finding that there is much more maturity in a lot of database technologies that are now coming out.

There is always something new on the horizon, but there are, as you said, columnar architectures and so on. These are already here, and we're constantly experimenting with them.

To your point about cloud infrastructure and where that is going, it's the same thing. We see ParAccel, Amazon, and data warehouses such as Redshift showing us the way where a lot of the technology is becoming very prepackaged. The value-add is to talk to the customer and speed up that process of integration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  BriefingsDirect  Business Intelligence  Dana Gardner  data analytics  HP  HPDiscover  Interarbor Solutions  risk assessment  Tim Meyer  Vertica  Vichara 


Five ways to make identity management work best across hybrid computing environments

Posted By Dana L Gardner, Wednesday, October 29, 2014

Any modern business has been dealing with identity and access management (IAM) from day one. But now, with more critical elements of business extending beyond the enterprise, access control complexity has been ramping up due to cloud, mobile, bring your own device (BYOD), and hybrid computing.

And such greater complexity forms a major deterrent to secure, governed, and managed control over who and what can access your data and services -- and under what circumstances. The next BriefingsDirect thought leader discussion then centers on learning new best practices for managing the rapidly changing needs around IAM.

While cloud computing gets a lot of attention, those of us working with enterprises daily know that the vast majority of businesses are, and will remain, IT hybrids, a changing mixture of software as a service (SaaS), cloud, mobile, managed hosting models, and of course, on-premises IT systems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're here with a Chief Technology Officer for a top IAM technology provider to gain a deeper understanding of the various ways to best deploy and control access management in this ongoing age of hybrid business.

Here to explore five critical tenets of best managing the rapidly changing needs around identity and access management is Darran Rolls, Chief Technology Officer at SailPoint Technologies in Austin, Texas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There must be some basic, bedrock principles that we can look to that will guide us as we're trying to better manage access and identity.

Rolls: Absolutely, there are, and I think that will be a consistent topic of our conversation today. It's something that we like to think of as the core tenets of IAM. As you very eloquently pointed out in your introduction, this isn't anything new. We've been struggling with managing identity and security for some time. The changing IT environment is introducing new challenges, but the underlying principles of what we're trying to achieve have remained the same. 


Rolls

The idea of holistic management for identity is key. There's no question about that, and something that we'll come back to is this idea of the weakest link -- a very commonly understood security principle. As our environment expands with cloud, mobile, on-prem, and managed hosting, the idea of a weak point in any part of that environment is obviously a strategic flaw.

As we like to say at SailPoint, it’s an anywhere identity principle. That means all people -- employees, contractors, partners, customers -- basically from any device, whether you’re on a desktop, cloud, or mobile, to anywhere. That includes on-prem enterprise apps, SaaS apps, and mobile. It’s certainly our belief that for any IAM technology to be truly effective, it has to span all for all -- all access, all accounts, and all users -- wherever they live in that hybrid runtime.

Gardner: So we're in an environment now where we have to maintain those bedrock principles for true enterprise-caliber governance, security, and control, but we have a lot more moving parts. And we have a cavalcade of additional things you need to support, which to me, almost begs for those weak links to crop up.

So how do you combine the two? How do you justify and reconcile these two realities -- secure and complex?

Addressing the challenge

Rolls: One way comes from how you address the problem and the challenge. Quite often, I'm asked if there's a compromise here: If I move my IAM to the cloud, will I still be able to sustain my controls and management and do the risk mitigation that we were trying to get to in the first place?

My advice is if you're looking at an identity-as-a-service (IDaaS) solution that doesn’t operate in terms of sustainable controls and risk mitigation, then stop, because controls and risk mitigation really are the core tenets of identity management. It’s really important to start a conversation around IDaaS by quite clearly understanding what identity governance really is.

This isn’t an occasional, office-use application. This is critical security infrastructure. We very much have to remember that identity sits at the center of that security-management lifecycle, and at the center of the users’ experience. So it’s super important that we get it right.

So in this respect, I like to think that IDaaS is more of a deployment option than any form of a compromise. There is a minimum set of table stakes that has to be in place. And, whether you're choosing to deploy an IDaaS solution or an on-prem offering, there should be no compromise in it.

We have to respect the principles of global visibility and control, of consistency, and of user experience. Those things remain true for cloud and on-prem, so the song remains the same, so to speak. The IT environment has changed, and the IAM solutions are changing, but the principles remain the same.

Gardner: I was speaking with some folks leading up to the recent Cloud Identity Summit, and more and more, people seem to be thinking that IAM is the true extended-enterprise management. It's more than just identity and access; it spans services and is essential for extended enterprise processes.

Being more inclusive means that you need to have the best of all worlds. You need to be able to be doing well on-premises as well as in the cloud, and not either/or.

Also, to your point, being more inclusive means that you need to have the best of all worlds. You need to be able to be doing IAM well on-premises, as well as in the cloud -- and not either/or.

Rolls: Most of the organizations that I speak to these days are trying to manage a balance between being enterprise-ready -- so supporting controls and automation and access management for all applications, while being very forward looking, so also deploying that solution from the cloud for cost and agility reasons. 

For these organizations, choosing an IDaaS solution is not a compromise in risk mitigation, it’s a conscious direction toward a more off-the-shelf approach to managing identity. Look, everyone has to address security and user access controls, and making a choice to do that as a service can’t compromise your position on controls and risk mitigation.

Gardner: I suppose the risk of going hybrid is that if you have somewhat of a distributed approach to your IAM capabilities, you'll lose that all-important single view of management. I'd like to hear more, as we get into these tenets, of how you can maintain that common control.

You have put some serious thought into making a logical set of five tenets that help people understand and deal with these changeable markets. So let’s start going through those. Tell me about the first tenet, and then we can dive in and maybe even hear an example of where someone has done this right.

Focusing on identity

Rolls: Obviously it would be easy to draw 10 or 20, but we like to try and compress it. So there's probably always the potential for more. I wouldn’t necessarily say these are in any specific order, but the first one is the idea of focusing on the identity and not the account.

This one is pretty simple. Identities are people, not accounts in an online system. And something we learned early in the evolution of IAM was that in order to gain control, you have to understand the relationships between people -- identities -- and their accounts, and between those accounts and the entitlements and data they give access to.

So this tenet really sits at the heart of the IAM value proposition -- it's all about understanding who has access to what, and what it really means to have that access. By focusing on the identity -- and capturing all of the relationships it has to accounts, to systems, and to data -- that helps map out the user security landscape and get a complete picture of how things are configured.
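
To make those relationships concrete, here is a minimal sketch in Python. The class and field names are invented for illustration only and do not reflect SailPoint's or any other product's actual data model; the point is simply that the identity, not the account, is the anchor that ties accounts and entitlements together.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Entitlement:
        system: str          # e.g. "salesforce" or "active_directory"
        name: str            # e.g. "Opportunity_Read"

    @dataclass
    class Account:
        system: str
        account_id: str
        entitlements: List[Entitlement] = field(default_factory=list)

    @dataclass
    class Identity:
        person_id: str       # stable identifier for the person, not a login name
        display_name: str
        accounts: List[Account] = field(default_factory=list)

        def access_summary(self):
            """Answer 'who has access to what' for this one identity."""
            return [(a.system, a.account_id, e.name)
                    for a in self.accounts
                    for e in a.entitlements]

A "who has access to what" report then becomes a simple walk over these relationships rather than a per-system export.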

Gardner: If I understand this correctly, all of us now have multiple accounts. Some of them overlap. Some of them are private. Some of them are more business-centric. As we get into the Internet of Things, we're going to have another end-point tier associated with a user, or an identity, and that might be sensors or machines. So it’s important to maintain the identity focus, rather than the account focus. Did I get that right?

Rolls: We see this today in classic on-prem infrastructure with system-shared and -privileged accounts. They are accounts that are operated by the system and not necessarily by an individual. What we advocate here, and what leads into the second tenet as well, is this idea of visibility. You have to have ownership and responsibility. You assign and align the system and functional accounts with people that can have responsibility.

The consequences of not understanding and accurately managing those identity and account relationships can be pretty significant.

In the Internet of Things, I wouldn't say there's nothing new, because, if nothing else, it's potentially a new order of scale. But it's functionally the same thing: understanding the relationships.

For example, I want to tie my Nest account back to myself or to some other individual, and I want to understand what it means to have that ownership. It really is just more of the same, and those principles that we have learned in enterprise IAM are going to play out big time when everything has an identity in the Internet of Things.

Gardner: Any quick examples of tenet one, where we can identify that we're having that focus on the user, rather than the account, and it has benefited them?

Rolls: For sure. The consequences of not understanding and accurately managing those identity and account relationships can be pretty significant. Unused and untracked accounts, something that we commonly refer to in the industry as "orphan accounts," often lead to security breaches. That’s why, if you look at the average identity audit practice, it’s very focused on controls for those orphan accounts.

We also know for a fact, based on the network forensic analysis that happens post-breach, that in many of the high-profile, large-scale security breaches that we've seen over the last two to five years, the back door is left open by an account that nobody owns or manages. It’s just there. And if you go over to the dark side and look at how the bad guys construct vulnerabilities, the first things they look for are these unmanaged accounts.

So it’s low-hanging fruit for IAM to better manage these accounts because the consequences can be fairly significant.
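
As a rough illustration of that low-hanging fruit, an orphan-account check can be as simple as comparing what aggregation finds on the target systems against what is correlated to a known identity. The data below is invented; in practice both sets would come from the IAM platform's aggregation step.

    # Accounts discovered on the target systems during aggregation.
    aggregated_accounts = {
        ("active_directory", "jsmith"),
        ("active_directory", "svc_backup_2009"),
        ("salesforce", "jsmith@example.com"),
    }

    # Accounts that are correlated to a living, managed identity.
    owned_accounts = {
        ("active_directory", "jsmith"),
        ("salesforce", "jsmith@example.com"),
    }

    # Anything aggregated but unowned is an orphan needing an owner or removal.
    for system, account in sorted(aggregated_accounts - owned_accounts):
        print(f"Orphan account: {system}/{account}")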

Tenet two

Gardner: Okay, tenet two. What’s next on your priority list?

Rolls: The next one is twofold: visibility is king, and silos are bad. These are really two thoughts that are closely related.

The first part is the idea that visibility is king, and this comes from the realization that you have to be able to capture, model, and visualize identity data before you have any chance of managing it. It’s like the old saying that you can’t manage what you can’t measure.

It’s the same thing for identity. You can’t manage the access and security you don’t see, and what you don’t see is often what bites you. So this tenet is the idea that your IAM system absolutely must support rapid, read-only aggregation of account and entitlement information as a first step, so you can understand the landscape.

The second part is the idea that silos of identity management can be really, really bad. A silo here is a standalone IAM application, or what one might think of as a domain-specific IAM solution. These are things like an IDaaS offering that only does cloud apps or an Active Directory-only management solution -- basically any IAM tool that creates a silo of process and data. This isolation goes against the idea of visibility and control that we just covered in the first tenet.

In education, we say "no child left behind." In identity, we say “no account left behind, and no system left behind.”

You can’t see the data if it’s hidden in a siloed system. It’s isolated and doesn't give you the global view you need to manage all identity for all users. As a vendor, we see some real-world examples of this. SailPoint just replaced a legacy provisioning solution at a large US-based bank, for example, because the old system was touching only 12 of their core systems.

The legacy IAM system the bank had was a silo managing just the Unix farm. It wasn't integrated, and its data and use case weren't shared. The customer needed a single place for their users to go to get access, and a single point of password control for their on-prem Unix farm and for their cloud-based, front-end application. So today SailPoint’s IdentityNow provides that single view for them, and things are working much better.

Gardner: It also reminds me that we need to be conscious of supporting the legacy and older systems, recognizing that they weren't necessarily designed for the reality we're in now. We also need to be flexible in the sense of being future-proof. So it's having visibility across your models that are shifting in terms of hybrid and cloud, but also visibility across the other application sets and platforms that were never created with this mixture of models that we are now supporting.

Rolls: Exactly right. In education, we say "no child left behind." In identity, we say "no account left behind, and no system left behind." We also shouldn’t forget there is a cost associated with maintaining those siloed IAM tools, too. If the system only supports cloud, or only supports on-prem, or only manages identity for mobile, SaaS, or just one area of the enterprise -- there’s cost. There's a real dollar cost for buying and maintaining the software, and probably more importantly, a soft cost in the end-user experience for the people who have to manage across those silos. So these IAM silos are not only preventing visibility and controls; there is a big cost here, a real dollar cost to the business, as well.

Gardner: This gets closer to the idea of a common comprehensive view of all the data and all the different elements of what we are trying to manage. I think that's also important.

Okay, number three. What are we looking at for your next tenet, and what are the ways that we can prevent any of that downside from it?

Complete lifecycle

Rolls: This tenet comes from the school of identity hard knocks, and is something I’ve learned from being in the IAM space for the past 20 or so years -- you have to manage the complete lifecycle for both the identity, and every account that the identity has access to.

Our job in identity management, our “place” if you will in the security ecosystem, is to provide cradle-to-grave management for corporate account assets. It's our job to manage and govern the full lifecycle of the identity -- a lifecycle that you’ll often hear referred to as JML, meaning Joiners, Movers and Leavers.

As you might expect, when gaps appear in that JML lifecycle, really bad things start to happen. Users don’t get the system access they need to get their jobs done, the wrong people get access to the wrong data and critical things get left behind when people leave.

Maybe the wrong people get access to the wrong data while they're in the mover phase, and then things get left behind when people leave. You have to track the account through that whole JML lifecycle; that really is what cradle-to-grave management means here.

That’s a very big issue for most of the companies that we talk to, and it’s captured in that lifecycle.

In general, worker populations are becoming more transient and work groups more dynamic.

Gardner: So it’s not just orphan accounts, but it’s inaccurate or outdated accounts that don’t have the right and up-to-date information. Those can become back doors. Those can become weak links.

It appears to me, Darran, that there's another element here in how our workplace is changing. We're seeing more and more of what they call "contingent workforces," where people will come in as contractors or third-party suppliers for a brief period of time, do a job, and get out.

It’s this lean, agile approach to business. This also requires a greater degree of granularity and fine control. Do you have any thoughts about how this new dynamic workforce is impacting this particular tenet?

Rolls: It’s certainly increasing the pressure on IT to understand and manage all of its population of users, whether they're short-term contractors or long-term employees. If they have access to an asset that the business owns, it’s the business's fiduciary duty to manage the lifecycle for that worker.

In general, worker populations are becoming more transient and work groups more dynamic. Even if it’s not a new person joining the organization, we’re creating and using more dynamic groups of people that need more dynamic systems access.

It’s becoming increasingly important for businesses today to be able to put together the access that people need quickly when a new project starts and then accurately take it away when the project finishes. And if we manage that dynamic access without a high degree of assured governance, the wrong people get to the wrong stuff, and valued things get left behind.

Old account

Quite often, people ask me whether it really matters when the odd account gets left behind, and my answer usually is: it certainly can. A textbook example of this is when a sales guy leaves his old company, goes to join a competitor, and no one takes away his salesforce.com account. He then spends the next six months dipping into his old company’s contacts and leads, because he still has access to the application in the cloud.

This kind of stuff happens all the time. In fact, we recently replaced another IDaaS provider at a client on the West Coast, specifically because “the other vendor” -- who shall remain nameless -- only did just-in-time SAML provisioning, with no leaver-based de-provisioning. So customers really do understand this stuff and recognize the value. You have to support the full lifecycle for identity or bad things happen for the customer and the vendor.
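
What leaver-based de-provisioning might look like at the API level is sketched below against a SCIM 2.0-style endpoint. The base URL, token, and user ID are placeholders, and a real IDaaS platform would normally drive this through its own connectors rather than hand-written calls.

    import requests

    SCIM_BASE = "https://scim.example.com/scim/v2"   # hypothetical endpoint
    API_TOKEN = "REPLACE_ME"

    def deactivate_account(scim_user_id: str) -> None:
        """Flip the SCIM 'active' flag to false when a leaver event fires."""
        patch = {
            "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
            "Operations": [
                {"op": "replace", "path": "active", "value": False}
            ],
        }
        response = requests.patch(
            f"{SCIM_BASE}/Users/{scim_user_id}",
            json=patch,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()

    # A leaver event from HR would then trigger something like:
    # deactivate_account("2819c223-7f76-453a-919d-413861904646")
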
Gardner: All right. We were working our way through our tenets. We're now on number four. Is there a logical segue between three and four? How does four fit in?

Rolls: Number four, for me, is all about consistency. It talks to the fact that we have to think of identity management in terms of consistency for all users, as we just said, from all devices and accessing all of our applications.

Practically speaking, this means that whether you sit with your Windows desktop in the office, or you are working from an Android tablet back at the house, or maybe on your smartphone in a Starbucks drive-through, you can always access the applications that you need. And you can consistently and securely do something like a password reset, or maybe complete a quarterly user access certification task, before hitting the road back to the office.

It’s very easy to think of consistency as just being in the IAM UI or just in the device display, but it really extends to the identity API as well.

Consistency here means that you get the same basic user experience, and I use the term user experience here very deliberately, and the same level of identity service, wherever you are. It has become very, very important, particularly as we have introduced a variety of incoming devices, that we keep our IAM services consistent.

Gardner: It strikes me that this consistency has to be implemented and enforced from the back-end infrastructure, rather than the device, because the devices are so changeable. We're even thinking about a whole new generation of devices soon, and perhaps even more biometrics, where the device becomes an entry point to services.

Tell me a bit about the means by which consistency can take place. This isn't something you build into the device necessarily.

Rolls: Yes, that consistency has to be implemented in the underlying service, as you’ve highlighted. It’s very easy to think of consistency as just being in the IAM UI or just in the device display, but it really extends to the identity API as well. A very good example to explore this concept of consistency of the API, is to think like a corporate application developer and consider how they look at consistency for IAM, too.

Assume our corporate application developer is developing an app that needs to carry out a password reset, or maybe it needs to do something with an identity profile. Does that developer write a provisioning connector themselves? Or should they implement a password reset in their own custom code?

The answer is, no, they don’t roll their own. Instead, they should make use of the consistent API-level services that the IAM platform provides -- they make calls to the IDaaS service. The IDaaS service is then responsible for doing the actual password reset using consistent policies, consistent controls, and a consistent level of business service. So, as I say, it’s about consistency for all use cases, from all devices, accessing all applications.
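
As a sketch of what that delegation could look like, the snippet below has an application hand a password reset to a hypothetical IDaaS REST endpoint instead of touching the target system itself. The URL, path, and payload are invented for illustration; the real call depends entirely on the identity platform's documented API.

    import requests

    IDAAS_BASE = "https://idaas.example.com/api/v1"   # hypothetical service
    API_TOKEN = "REPLACE_ME"

    def request_password_reset(identity_id: str, target_system: str) -> dict:
        """Ask the identity service to perform the reset under its own
        policies and controls, rather than writing a connector in app code."""
        response = requests.post(
            f"{IDAAS_BASE}/identities/{identity_id}/password-reset",
            json={"target": target_system},
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()   # e.g. a request ID the application can track

The design point is that the policy and controls live in one place; the application only asks.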

Thinking about consistency

Gardner: And even as we think about the back-end services support, that itself also needs to extend to on-prem legacy, and also to cloud and SaaS. So we're really thinking about consistency deep and wide.

Rolls: Precisely, and if we don’t think about consistency for identity as a service, we're never going to have control. And importantly, we're never going to reduce the cost of managing all this stuff, and we're never going to lower the true risk profile for the business.

Gardner: We're coming up on our last tenet, number five. We haven't talked too much about the behavior, the buy-in. You can lead a horse to water, but you can't make him drink. This, of course, has an impact on how we enforce consistency across all these devices, as well as the service model. So what do we need to do to get user buy-in? How does number five affect that?

Rolls: Number five, for me, is the idea that the end-user experience for identity is everything. Once upon a time, the only user for identity management was IT itself, and identity was an IT tool for IT practitioners. It was mainly used by the help desk and by IT pros to automate identity and access controls. Fortunately, things have changed a lot since then, both in the identity infrastructure and, very importantly, in end users’ expectations.

The expectation is to move the business user to self service for pretty much everything, and that very much includes Identity Management as a Service as well.

Today, IAM really sits front and center in the business user's IT experience. When we think of something like single sign-on (SSO), it literally is the front door to the applications and the services that the business is running. When a line-of-business person sits down at an application, they're just expecting seamless access via secure single sign-on. The expectation is that they can just quickly and easily get access to the things they need to get their job done.

They also expect identity-management services, like password management, access request, and provisioning to be integrated, intuitive, and easy to use. So the way these identity services are delivered in the user experience is very important.

Pretty much everything is self-service these days. The expectation is to move the business user to self-service for pretty much everything, and that very much includes Identity Management as a Service (IDaaS) as well. So the UI just has to be done right and the overall users’ experience has to be consistent, seamless, intuitive, and just easy to deal with. That’s how we get buy-in for identity today, by making the identity management services themselves easy to use, intuitive, and accessible to all.

Gardner: And isn’t this the same as making the governance infrastructure invisible to the end user? In order to do that, you need to extend across all the devices, all the deployment models, and the APIs, as well as the legacy systems. Do you agree that we're talking about making it invisible, but that we can’t do that unless we're following the previous four tenets?

Rolls: Exactly. There's been a lot of industry conversation around this idea of identity being part of the application and the users’ flow, and that’s very true. Some large enterprises do have their own user-access portals, specific places that you go to carry out identity-related activities, so we need integration there. On the other hand, if I'm sitting here talking to you and I want to reset my Active Directory password, I just want to pick up my iPhone and do it right there, and that means secure identity API’s.

We talked a good amount about the business user experience. It is very important to realize that it’s not just about the end user and the UI. It also affects how the IDaaS service itself is configured, deployed, and managed over time. This means the user experience for the system owner, be that someone in IT or in the line of business -- it doesn’t really matter who -- has to be consistent and easy to use, and has to lead to easier configuration, faster deployment, and faster time-to-value. We do that by making sure that the administration interface and the APIs that support it are consistent and generally well thought out, too.

Intersect between tenets

Gardner: I can tell, Darran, that you've put an awful lot of thought into these tenets. You've created them with some order, even though they're equally important. This must be also part of how you set about your requirements for your own products at SailPoint.

Tell me about the intersect between these tenets, the marketplace, and what SailPoint is bringing to address the problems these tenets identify, but also the solution side, in terms of how to do things well.

Rolls: You would expect every business to say these words, but they have great meaning for us. We're very, very customer focused at SailPoint. We're very engaged with our customers and our prospects. We're continually listening to the market and to what the buying customer wants. That’s the outside-in part of the product requirements story, basically building solutions to real customer problems.

Internally, we have a long history in identity management at SailPoint. That shows itself in how we construct the products and how we think about the architecture and the integration between pieces of the product. That’s the inside-out part of the product requirements process, building innovative products and solutions that work well over time.

As SailPoint has strategically moved into the IDaaS space, we’ve brought with us a level of trust, a breadth of experience, and a depth of IAM knowledge.

So I guess that all really comes down to good internal product management practices. Our product team has worked together for a considerable time across several companies. So that’s to be expected. It's fair to say that SailPoint is considered by many in the industry as the thought leader on identity governance and administration. We now work with some of the largest and most trusted brand names in the world, helping them provide the right IAM infrastructure. So I think we’re getting it right.

As SailPoint has strategically moved into the IDaaS space, we’ve brought with us a level of trust, a breadth of experience, and a depth of IAM knowledge that shows itself in how we use and apply these tenets of identity in the products and the solutions that we put together for our customers.

Gardner: Now, we talked about the importance of being legacy-sensitive, focusing on what the enterprise is and has been and not just what it might be, but I'd like to think a little bit about the future-proofing aspects of what we have been discussing.

Things are still changing and, as we said, there are new generations of mobile devices, and perhaps more biometrics doing away with passwords and letting us identify ourselves through the device, all of which then needs to filter back through the entire lifecycle of IAM implications and endpoints.

So when you do this well, if you follow the five tenets, if you think about them and employ the right infrastructure to support governance in IAM for both the old and the new, how does that set you up to take advantage of some of the newer things? Maybe it’s big data, maybe it’s hybrid cloud, or maybe it's agile business.

It seems to me that there's a virtuous adoption benefit when you do IAM well.

Changes in technologies

Rolls: As you've highlighted, there are lots of new technologies out there that are effecting change in corporate infrastructure. In itself, that change isn’t new. I came into IT with the advent of distributed systems. We were going to replace every mainframe. Mainframes were supposed to be dead, and it's kind of interesting that they're still here.

So infrastructure change is most definitely accelerating, and the options available for the average IT business these days -- cloud, SaaS and on-prem -- are all blending together. That said, when you look below the applications, and look at the identity infrastructure, many things remain the same. Consider a SaaS app like Salesforce.com. Yes, it’s a 100 percent SaaS cloud application, but it still has an account for every user.

I can provide you with SSO to your account using SAML, but your account still has fine-grained entitlements that need to be provisioned and governed. That hasn’t changed. All of the new generation of cloud and SaaS applications require IAM. Identity is at the center of the application, and it has to be managed. If you adopt a mature and holistic approach to that management, it stands you in good stead.

If you're not on board, you'd better get on board, because the challenges for identity are certainly not going away.

Another great example is the mobile device management (MDM) platforms out there -- a new piece of management infrastructure that has come about to manage mobile endpoints. The MDM platforms themselves have identity control interfaces. It's our job in IAM to connect with these platforms and provide control over what’s happening to identity on the endpoint device, too.

Our job in identity is to manage identity lifecycles wherever they sit in the infrastructure. If you're not on board, you'd better get on board, because the challenges for identity are certainly not going away.

Interestingly, I'm sometimes challenged when I make a statement like that. I’ll often get the reply that "with SAML single sign-on, the passwords go away, so the account management problem goes away, right?" The answer is no, they don’t. There are still accounts in the application infrastructure. So good, best-practice identity and access management will remain key as we keep moving forward.

Gardner: And of course as you pointed out earlier, we can expect the scale of what's going to be involved here to only get much greater.

Rolls: Yes, 100 percent. Scale is key to architectural thinking when you build a solution today, and we're really only just starting to touch where scale is going to go.

It’s very important to us at SailPoint, when we build our solutions, that the product we deliver understands the scale of business today and the scale that is to come. That affects how we design and integrate the solutions; it affects how they are configured and how they are deployed. It’s imperative to think scale -- that’s certainly something we do.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SailPoint Technologies.

You may also be interested in:

Tags:  BriefingsDirect  Dana Gardner  Darran Rolls  iam  IDaaS  Identity and access management  Identity as a service  Interarbor Solutions  Sailpoint 


Large Russian bank, Otkritie Bank, turns to big data analysis to provide real-time financial insights

Posted By Dana L Gardner, Friday, October 24, 2014
Updated: Monday, October 27, 2014

The next BriefingsDirect deep-dive big data benefits case study interview explores how Moscow-based Otkritie Bank, one of the largest private financial services groups in Russia, has built out a business intelligence (BI) capability for wholly new business activity monitoring (BAM) benefits.

The use of HP Vertica as a big data core to the BAM infrastructure provides Otkritie Bank improved nationwide analytics and a competitive advantage through better decision-making based on commonly accepted best information that's updated in near real-time.


Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about Otkritie Bank's drive for improved instant analytics, BriefingsDirect sat down with Alexei Blagirev, Chief Data Officer at Otkritie Bank, at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your choice for BI platforms. 

Blagirev: Otkritie Bank is a member of the Open Financial Corporation (now Otkritie Financial Corporation Bank), which is one of the largest private financial services groups in Russia. The reason we selected HP Vertica was that we tried to establish a data warehouse that could provide operational data storage and could also be an analytical OLAP solution. 

Blagirev

It was a very hard decision. We tried to draw on past experience from our team, from my side, and so on. Everyone had some negative experience with different solutions like Oracle, because there was a big constraint.

We could not integrate operational data storage and OLAP solutions. Why? Because high-transaction data has to be put into the data warehouse (DWH), and in nearly every case that was the biggest constraint to building high-transaction data storage.

Vertica was a very good solution that removed this constraint. While selecting Vertica, we were also evaluating different solutions like IBM. We identified advantages of Vertica against IBM from two different perspectives.

One was performance. The second was that Vertica is cost-efficient. Since we were comparing against Netezza (now part of IBM), we were comparing not only software, but software plus hardware. You can't build a Netezza cluster at a custom size. You can only build it with 32 terabytes, and so on.

Very efficient

We were also limited by the logistics of those building blocks, the so-called big green box of Netezza. With Vertica, it's really efficient, because we can use any hardware.

So we calculated our total cost of ownership (TCO) on a horizon of five years, and it was lower than if we built the data warehouse with different solutions. This was the reason we selected Vertica.

Fully experience the HP Vertica analytics platform ...

Become a member of myVertica.

From the technical perspective and from the cost-efficient perspective, there was a big difference in the business case. Our bank is not a classical bank in the Russian market, because in our bank the technology team leads the innovation, and the technology team is actually the influence-maker inside the business.

So, the business was with us when we proposed the new data warehouse. We proposed to build the new solution to collect all data from the whole of Russia and to organize it via a so-called continuous load. This means that within the day, we can show all the data on what's going on with the business operations, from all lines of business across all of Russia. It sounds great.

When we were selecting HP Vertica, we selected not only Vertica, but a technical bundle. We also needed a replicator, and we chose Oracle GoldenGate.

We selected the appropriate ETL tool and the BI front end. So all together, it was a technical bundle, where Vertica was the middleware technical solution. So far, we have built a near-real-time DWH, but we don't call it near-real-time; we call it "just-in-time," because we want to be congruent with the decision-making process. We want to influence the business to let them think more about their decisions and about their business processes.

Everything appears really quick and it's actually influencing business to make decisions, to think more, and to think fast.

As of now, I can show all data collected and put inside the DWH within 15 minutes and show the first general process in the bank, the process of the loan application. I can show the number of created applications, plus online scoring and show how many customers we have at that moment in each region, the amounts, the average check, the approval rate, and the booking rate. I can show it to the management the same day, which is absolutely amazing.
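
For a sense of what that intra-day reporting can look like on the query side, here is an illustrative sketch. The table and column names are invented, and vertica_python is just one common client library for HP Vertica; the point is that application volumes, average check, and approval rate can be computed directly on the continuously loaded data.

    import vertica_python

    conn_info = {"host": "dwh.example.com", "port": 5433,
                 "user": "bam_reader", "password": "REPLACE_ME",
                 "database": "dwh"}

    QUERY = """
        SELECT region,
               COUNT(*)                        AS applications,
               AVG(requested_amount)           AS avg_check,
               SUM(CASE WHEN status = 'APPROVED' THEN 1 ELSE 0 END)
                   / COUNT(*)::FLOAT           AS approval_rate
          FROM loan_applications
         WHERE created_at >= CURRENT_DATE
         GROUP BY region
         ORDER BY applications DESC
    """

    connection = vertica_python.connect(**conn_info)
    try:
        cursor = connection.cursor()
        cursor.execute(QUERY)
        for region, apps, avg_check, approval_rate in cursor.fetchall():
            print(region, apps, round(float(avg_check), 2),
                  f"{float(approval_rate):.1%}")
    finally:
        connection.close()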

The tricky part is what the business will do with this data. It's tricky, because the business was not ready for this. The business was actually expecting that they could run a script, go to the kitchen, make a coffee, and then come back.

But, boom, everything appears really quickly, and it's actually influencing the business to make decisions, to think more, and to think fast. This, I believe, is the biggest challenge, to grow business analytics inside the business for those who will be able to use this data.

As of now, we are in the pilot stage, the pilot phase of what we call business activity monitoring (BAM). This is actually a funny story, because in Russia the same abbreviation refers to the Baikal-Amur Mainline (BAM), a huge railroad across the whole country that connects all the cities. It's kind of our story, too; we connect all departments and show the data in near real-time.

Next phase

In this case, we're actually working on the next phase of BAM, and we're trying to synchronize the methodology across all products, across all departments, which is very hard. For example, approval rates could be calculated differently for the credit cards or for the cash loans because of the process.

Since we're trying to establish a BI function almost from ground zero, HP Vertica is only the technical side. We need to think more about the educational side, and we need to think about the framework side. The general framework that we're trying to follow, since we're trying to build a BI function, is a unified Business Glossary (or accepted services directory), first of all.

It's obvious to use a Business Glossary and to use a single term to refer to the same entity everywhere. But it is not happening as of now, because the business units are still trying to use different definitions. I think it's a common problem everywhere in the business.

The second is to explain that there are two different types of BI tools. One is BI for the data mart, a so-called regular report. Another tool is a data discovery tool. It's the tool for the data lab (i.e. mining tool).

Fully experience the HP Vertica analytics platform ...

Become a member of myVertica.

So we differentiate data lab from data mart. Why? Because we're trying to build a service-oriented model, which in the end produces analytical services, based on the functional map.

When you're trying to answer a question using some analytics, it is actually a regular question, and this is the tricky part. All the questions that are raised by the business, by any business analyst, are regular questions; they are fundamental.

The correct way to develop an analytical service is to collect all these questions into a kind of question library. You can call it a functional map and such, but these questions define the analytical service for those functions.

For example, if you're trying to produce cost control, what kind of business questions do you want to answer? What kind of business analytics or metrics do you want to bring to the end-users? Is this really mapped to the question raised, or you are trying to present different analytics? As of now, we feel it's difficult to present this approach. And this is the first part.

The second part is a data lab for ad hoc data discovery. When, for example, you're trying to produce a marketing campaign for the customers, trying to produce customer segments, trying to analyze some credit scoring methodology, or trying to validate scientific expectations, you need to produce some research.

It's not a regular activity. It's more ad hoc analysis, and it will use different tools for BI. You can’t combine all the tools and call it a universal BI tool, because it doesn't work this way. You need to have a different tool for this.

Creating a constraint

This will create a constraint for the business users, because they need some education. In the end, they need to know many different BI tools.

This is a key constraint that we have now, because end users are more comfortable working with Excel, which is great. I think it's the most popular BI data discovery tool in the world, but it has its own constraints.

I love Microsoft. Everyone loves Microsoft, but there are different beautiful tools like TIBCO Spotfire, for example, which combines MATLAB, R, and so on. You can input models of SAS and so on. You can also write the scripts inside it. This is a brilliant data discovery tool.

But try to teach this tool to your business analyst. In the beginning, it's hard, because it's like a J curve. They will work through the valley of despair, criticizing it. "Oh my God, what are you trying to create, because this is a mess from my perspective?" And I agree with them in the beginning, but they need to go through this valley of despair, because in the end, there will be really good stuff. This is because of the cultural influence.

This will create a constraint for the business users, because they need some education. In the end, they need to know many different BI tools.

Gardner: Tell me, Alexei, what sort of benefits have you been able to demonstrate to your banking officials, since you've been able to get this near real-time, or just-in-time analytics -- other than the fact that you're giving them reports? Are there other paybacks in terms of business metrics of success?

Blagirev: First of all, we differentiate our stakeholders. We have top management stakeholders, which is the board. There are the middle-level stakeholders, which are our regional directors.

I'll start from the bottom, with the regional directors. They just open the dashboard. They don't click anything or refresh. They just see that they have data and analytics on what's going on in their region.

They don’t care about the methodology, because there is BAM, and they just use figures for decision making. You don’t think about how it got there, but you think about what to do with these figures. You focus more on your decision, which is good.

They start to think more about their decisions and they start to think more about the process side. We may show, for example, that at 12 o'clock our stream of cash loan applications went down. Why? I have no idea. Maybe they all went out for dinner. I don't know.

But nobody says that. They say, "Alexei, something is happening." They see true figures and they know they are true figures. They have instruments to exercise operational excellence. This is the first benefit.

Top management

The second is top management. We had a management board where everyone came and showed different figures. We'd spend 30 minutes, or maybe an hour, just debating which figures were true. I think this is a common situation in Russian banks, and maybe not only banks.

Now, we can just open the report, and I say, "This is the single report, because it shows intra-day figures, and these metrics were calculated according to the methodology." We actually link in the time of calculation, which shows that this KPI, for example, was calculated at 12 o'clock. You can take the figures from 12 o'clock, and if you don't believe them, you can ask the auditors to repeat the calculation, and it will come out the same way.

Nobody argues anymore about how to calculate the figures. So they started to think about what methodology to apply to the business process. This actually reverses the focus, toward what's going on with our business process. This is the second benefit.

Gardner: Any other advice that you would give to organizations who are beginning a process toward BI?

Try to disclose all your company and software vision, because Vertica or other BI tools are only a part. Try to see all the company's lines, all information.

Blagirev: First of all, don’t be afraid to make mistakes. It's a big thing, and we all forget that, but don’t be afraid. Second, try to create your own vision of strategy for at least one year.

Third, try to disclose your whole company and software vision, because HP Vertica or other BI tools are only a part. Try to see all the company's lines, all the information, because this is important. You need to understand where the value is, where shareholder value is being lost, and whether you are creating value for the shareholder. If the answer is yes, don't be afraid to protect your decision and your strategy, because otherwise, in the end, there will be problems. Believe me.

As Gandhi mentioned, in the beginning everyone laughs, then they begin hating you, and in the end, you win.

Gardner: With your business activity monitoring, you've been able to change business processes, influence the operations, and maybe even the culture of the organization, focusing on the now and then the next set of processes. Doesn’t this give you a competitive advantage over organizations that don’t do this?

Blagirev: For sure. Actually, this gives a competitive advantage, but this competitive advantage depends on the decision that you're making. This actually depends on everyone in the organization.

Understanding this brings new value to the business, but it depends on the final decisions of the people who sit in those positions. Now, those people understand. They're actually handling the business, and they see how they're handling the business.

Fully experience the HP Vertica analytics platform ...

Become a member of myVertica.

I can compare this solution to those at other banks. I have worked for Société Générale and for Alfa-Bank, which is the largest private bank in Russia. I've been an auditor of financial services at PwC. I saw the different reporting and different processes, and I can say that this solution is actually unique in the market.

Why? It shows congruent information in near real-time, inside the day, for all the data, for the whole of Russia. Of course, it brings benefit, but you need to understand how to use it. If you don’t understand how to use this benefit, it's going to be just a technical thing.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  Alexei Blagirev  big data  BriefingsDirect  Dana Gardner  data analysis  HP  HP Vertica  HPDiscover  Interarbor Solutions  OpenBank 


A practical guide to rapid IT Service Management as a foundation for overall business agility

Posted By Dana L Gardner, Wednesday, October 22, 2014

The next BriefingsDirect thought leadership panel discussion centers on how rapidly advancing IT service management (ITSM) capabilities form a bedrock business necessity, not just an IT imperative.

Businesses of all stripes rate the need to move faster as a top priority, and many times, that translates into the need for better and faster IT projects. But traditional IT processes and disjointed project management don't easily afford rapid, agile, and adaptive IT innovation.

The good news is that a new wave of ITSM technologies and methods allow for a more rapid ITSM adoption -- and that means better rapid support of agile business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To deeply explore a practical guide to fast ITSM adoption as a foundation for overall business agility the panel consists of John Stagaman, Principal Consultant at Advanced MarketPlace based in Tampa, Florida; Philipp Koch, Managing Director of InovaPrime, Denmark, and Erik Engstrom, CEO of Effectual Systems in Berkeley, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: John Stagaman, let me start with you. We hear a lot, of course, about the faster pace of business, and cloud and software as a service (SaaS) are part of that. What, in your mind, are the underlying trend or trends that are forcing IT's hand to think differently, behave differently, and to be more responsive?

Stagaman: If we think back to the typical IT management project historically, what happened was that, very often, you would buy a product. You would have your requirements and you would spend a year or more tailoring and customizing that product to meet your internal vision of how it should work. At the end of that, it may not have resembled the product you bought. It may not have worked that well, but it met all the stakeholders’ requirements and roles, and it took a long time to deploy.

Stagaman

That level of customization and tailoring resulted in a system that was hard to maintain, hard to support, and especially hard to upgrade, if you had to move to a new version of that product down the line. So when you came to a point where you had to upgrade, because your current version was being retired or for some other reason, the cost of maintenance and upgrade was also huge.

It was a lesson learned by IT organizations. Today, saying that an upgrade will take a year, or even six months, really gets a response: why should it? There's been a change in the way it's approached with most of the customers we go on-site to now. Customers say they want to use out of box. It used to be that they would say they wanted to use out of box and then list all the things they wanted that were not out of box; sometimes that still happens.

But they've gotten much better at saying they want to start from out of box, leverage that, and then fill in the gaps, so that they can deploy more quickly. They're not opening the box, throwing it away, and building something new. By working on that application foundation and extending where necessary, it makes support easier and it makes the upgrade path to future versions easier.

Moving faster

Gardner: It sounds like moving toward things like commodity hardware and open-source projects and using what you can get as is, is part of this ability to move faster. But is it the need to move faster that’s driving this or the ability to reduce customization? Is it a chicken and egg? How does that shape up?

Unleash the power of your user base ...
Learn how to use big data for proactive problem solving

with a free white paper 

Engstrom: I think that the old use case of "design, customize, and implement" is being forced out as an acceptable approach, because SaaS, platform as a service (PaaS), and the cloud are raising what stakeholders can expect. Stakeholders are retiring, and fresher sets of technologies and experiences are coming in. These two- and three-year standup projects are not acceptable.

Engstrom

If you're not able to do fast time-to-value, you're not going to get funding. Funding isn’t in the $8 million and $10 million tranches anymore; it’s in the $200,000 and $300,000 tranche. This is having a direct effect on on-premise tools, the way the customers are planning, and OPEX versus CAPEX.

Gardner: Philipp, how do you come down on this? Is this about doing less customization or doing customization later in the process and, therefore, more quickly?

Koch: I don't think it's about the customization element in itself. It is actually more that, in the past, customers said they wanted to tailor the tool, but then they said they wanted this and that, took the software off the shelf, and started to rebuild it.

Now, with the SaaS tool offerings coming into play, you can't do that anymore. You can't build your ITSM solution from scratch. You want to be able to take it, according to the use case, and adjust it with customization or configuration. You don't want to tailor it from the ground up.

Koch

But customization happens while you deploy the project and that has to happen in a faster way. I can only concur with all the other things that have already been said. We don't have huge budgets anymore. IT, as such, never had huge budgets, but, in the past, it was accepted that a project like this took a long time to do. Nowadays, we want to have implementations of weeks. We don’t want to have implementations of months anymore.

Gardner: Let’s just unpack a little bit the relationship between ITSM and IT agility. Obviously, we want things to move quickly and be more predictable, but what is it about moving to ITSM rapidly that benefits the business? And I know this is rather basic, but I think we need to do it for all the types of listeners we have.

Back to you, John. Explain and unpack what we mean by rapid ITSM as a means to better IT performance and rapid management of projects.

Best practices

Stagaman: For an organization that is new to ITSM processes, starting with a foundational approach and moving in with an out-of-box build helps them align with best practice and can be a lot faster than if they try to develop from scratch. SaaS is a model for that, because with SaaS you're essentially saying you're going to use this standard package.

The standard package is strong, and there's more leverage to use that. We had a federal customer that, based on best practice, reorganized how they did all their service levels. Those service levels were aligned with services that allowed them, for the first time, to report to their consuming bureaus the service levels per application that those bureaus subscribed to. They were able to provide much more meaningful reporting.

They wouldn’t have done that necessarily if the model didn't point in that direction. Previously, they hadn't organized their infrastructure along the lines to say, "We provide these application services to our customer."

Gardner: Erik, how do you see the relationship between rapid, better ITSM and better overall IT performance? Do many people struggle with this relationship?

Engstrom: Our approach at Effectual, what we focus on, is the accountability of data and the ability for an organization to reduce waste by using good data. We're not service [process] management experts in the sense that we're going to define a best practice; we focus strictly on "here is the best piece of data everyone on your team is working [with] across all tools." In that way, what our customers are able to see is transparency. So data from one system is available in another system.

Those kinds of mistakes are reduced when you share across tools. So that’s our focus and that’s where we're seeing benefit.

What that means is that you see far fewer servers being taken offline when they're the wrong server. We had a customer bring down their [whole] retail zone of systems, systems that the same team had just stood up the week before. Because the data was good, and because they were using out-of-the-box features, they were able to reduce mistakes and business impact they otherwise would not have seen.

Had they stayed with one tool or one silo of data, it’s only one source of opinion. Those kinds of mistakes are reduced when you share across tools. So that’s our focus and that’s where we're seeing benefit.

Gardner: Philipp, can you tell us why rapid ITSM has a powerful effect here in the market? But, before we get into that and how to do it, why is rapid ITSM so important now?

Koch: What we're seeing in our market is that customers are demanding service like they're getting at home at the end of the day. This sounds a little bit cliché-like, but they would like to get something ordered on the Internet, have it delivered 10 minutes later, and working half an hour later.

If we're talking about doing a classical waterfall approach to projects as was done 5 or 10  years ago, we're talking about months, and that’s not what the customer wants.

IT has to deliver that. In a lot of organizations, IT is still fairly slow in delivering bigger projects, and ITSM is considered a bigger project. We're seeing a lot of shadow IT appearing, where business units that are demanding that agility are not getting it from IT, so they're doing it themselves, and then we have a big problem.

Counter the trend

With rapid ITSM, we can actually counter that trend. We can go in and give our customers what's needed to be able to please the business demand of getting something fast. By fast, we're talking about weeks now. We're of course not talking 10 minutes in project sizes of an ITSM implementation, but we can do something where we're deploying a SaaS solution.

We can have it ready for production after a week or two and get it into use. Before, when we did on-premise or when we did tailoring from scratch, we were talking months. That’s a huge business advantage or business benefit of being able to deliver what the business units are asking for.

Gardner: John Stagaman, what holds back successful rapid ITSM approach? What hinders speed, why has it been months rather than days typically?

Stagaman: Erik referenced one thing already. It has to do with the quality of source data when you go to build a system. One thing that I've run into numerous times is that there is often an assumption that finding all the canonical sources of data for just the general information that you need to drive your IT system is already available and it’s easy to populate. By that I mean things like, what are our locations, what are our departments, who are our people?

I'm not even getting to the point of asking what are our configuration items and how are they related? A lot of times, the company doesn't have a good way to even identify who a person is uniquely over time, because they use something with their name. They get married, it changes, and all of a sudden that’s not a persistent ID.

One thing we address early is making sure that we identify those gold sources of data for who and what, for all the factual data that has to be loaded to support the process.
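
To make that concrete, here is a minimal sketch in Python of what keying people data off a persistent ID looks like in practice. The field names and records are invented for illustration; this is not any particular product's schema or API.

    # Minimal sketch: reconcile ITSM "person" records against a gold source
    # keyed on a persistent employee ID, not a display name that can change.
    # All field names and records here are hypothetical.

    gold_source = {  # e.g. an HR export: employee_id -> current attributes
        "E1001": {"name": "Jane Smith", "department": "Finance", "location": "Toronto"},
        "E1002": {"name": "Raj Patel",  "department": "IT",      "location": "Fort Worth"},
    }

    itsm_people = [  # records already loaded in the ITSM tool
        {"employee_id": "E1001", "name": "Jane Jones", "department": "Finance", "location": "Toronto"},
        {"employee_id": "E1003", "name": "Lee Wong",   "department": "HR",      "location": "Austin"},
    ]

    def reconcile(gold, loaded):
        """Return records to update (drifted attributes) and to retire (no longer in gold)."""
        to_update, to_retire = [], []
        for record in loaded:
            emp_id = record["employee_id"]
            if emp_id not in gold:
                to_retire.append(emp_id)
            elif any(record[k] != v for k, v in gold[emp_id].items()):
                to_update.append({"employee_id": emp_id, **gold[emp_id]})
        return to_update, to_retire

    updates, retirements = reconcile(gold_source, itsm_people)
    print(updates)      # the married-name change is caught because the key is the ID
    print(retirements)  # E1003 is flagged for retirement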

The other major thing that I run into that introduces risks into a project is when requirements aren't really requirements. A lot of times, when we get requirements, it’s a bunch of design statements. Those design statements are about how they want to do this in the tool. Very often, it’s based on how the tool we're replacing worked.

If you don't go through those, flag each one as a design statement rather than a functional requirement, and ask what it is they actually need to do, it becomes very hard to look at the new tool you're deploying and say that it does that thing this way. It can lead to excess customization, because you're trying to meet a goal that isn't consistent with how your new product works.

Those are two things we usually do very early on, where we have to quality check the requirements, but those are also the two things that most often will cause a project to extend or derail.

Gardner: Philipp, any thoughts on the problems and hurdles, such as poor data quality or incomplete configuration management data? What is it, from your perspective, that holds things back?

Old approach

Koch: I agree with what John says. That’s definitely something that we see when we meet customers.

Other areas that I see are more towards the execution of the projects itself. Quite often, customers know what agile is, but they don’t understand it. They say they're doing something in an agile way. Then, they show us a drawing that has a circle on it and then they think they are agile.

When you start to actually work with them, they're still in the old waterfall approach of stage gates and milestones.

So, you're trying to do a rapid ITSM implementation that follows agile principles, but you're getting stuck by internal unawareness or misunderstanding of what this really means. You're struggling to do an agile implementation, and it becomes non-agile as a result. That, of course, delays projects.

Quite often, we see that. So in the beginning of the projects, we try to have a workshop or try to get the people to understand what it really means to do an agile project implementation for an ITSM project. That’s one angle.

The other angle, which I also see quite often, goes into the area of the requirements, the way John had described them. Quite often, those requirements are really features, as in they are hidden features that the customer wants. They are turned into some sort of requirements to achieve that feature. But very seldom do we see something that actually addresses the business problem.

They should not really care if you can right-click in the background and add a new field to this format. That’s not what they should be asking for. They should be asking whether it's easy to tailor the solution. It doesn’t really matter how. So that’s where quite often you're spending a lot of time reading those requirements and then readjusting them to match what you really should be talking about. That, of course, delays projects.

In a nutshell, we technology guys, who work with this on a daily basis, could actually deliver projects faster if we could manage to get the customers to accept the speed that we deliver. I see that as a problem.

Gardner: So being real about agile, having better data, knowing more about what your services are and responding to them are all part of overcoming the inertia and the old traditional approaches. Let’s look more deeply into what makes a big difference as a solution in practice.

Erik Engstrom, what helps get agile into practice? How are we able to overcome the drawbacks of over-customization and the more linear approach? Do you have any thoughts about moving towards a solution?

Maturity and integration

Engstrom: Our approach is to provide as much maturity, and as complete an integration as possible, on day one. We've developed a huge amount of libraries of different packages that do things such as to advance the tuning of a part of a tool, or to advance the integration between tools. Those represent thousands of hours that can be saved for the customer. So we start a project with capabilities that most projects would arrive at.

This allows the customer to be agile from day one. But it requires that mentality that both Philipp and John were speaking about, which is, if there’s a holdout in the room that says “this is the way you want things,” you can’t really work with the tools the way that they [actually] do work. These tools have a lot of money and history behind them, but one person’s vision of how the tools should work can derail everything.

We ask customers to take a look at an interoperable functioning matured system once we have turned the lights on, and have the data moving through the system. Then they can start to see what they can really do.

It's a shift in thinking that we have covered well over the last few minutes, so I won't go into it. But it's really a position of strength for them to say, "We've implemented, we've integrated. Now, where do we really want to go with this amazing solution?"

Gardner: What is it about the new toolset that’s allowing this improvement, the pre-customization approach? How does the technology come to bear on what’s really a very process-centric endeavor?

Engstrom: There are certain implementation steps that every customer, every project, must undergo. It’s that repetition that we're trying to remove from the picture. It’s the struggle of how to help an organization start to understand what the tools can do. What does it really look like when people, party, location, and configuration information is on hand? Customers can’t visualize it.

So the faster we can help customers start to see a working system with their data, the easier it is to start to move and maintain an agile approach. You start to say, "Let’s keep this down to a couple of weeks of work. Let us show it to you. Let’s visit it."

If we're faster as consultancies, if we're not taking six months, if we're not taking two months and we can solve these things, they'll start to follow our lead. That’s essential. That momentum has to be maintained through the whole project to really deliver fast.

Gardner: John Stagaman, thoughts about moving fast, first as consultants, but then also leveraging the toolsets? What’s better about the technology now that, in a sense, changes this game too?

Very different

Stagaman: In the ITSM space, the maturity of the product out of the box, versus 10 years ago, is very different. Ten or 15 years ago, the expectation was that you were going to customize the whole thing.

There would be all these options that were there so you could demo them, but they weren’t necessarily built in a cohesive way. Today, the tools are built in different ways so that it's much closer to usable and deployable right out of the box.

The newest versions of those tools very often have done a much better job of creating broadly applicable process flow, so that you can use that same out of the box workflow if you're a retailer, a utility, or want to do some things for the HR call center without significant change to the core workflow. You might need to have the specific data fields related to your organization.

And, there's more. We can start from this ITSM framework that's embedded and extend it where we need to.

Gardner: Philipp, thoughts about what’s new and interesting about tools, and even the SaaS approach to ITSM, that drives, from the technology perspective, better results in ITSM?

Koch: I'll concur with John and Erik that the tools have changed drastically. When I started in this business 10 or 15 years ago, ITSM solutions looked almost like the old green screens of computing.

If you’re looking at ITSM solutions today, they're web based. They're Web 2.0 technology, HTML5, and responsive UIs. It doesn’t really matter which device you use anymore, mobile phone, tablet, desktop, or laptop. You have one solution that looks the same across all devices. A few years ago, you had to install a new server to be able to run a mobile client, if it even existed.

So, there has been huge demand for vendors to deliver on what users need today. That has changed drastically with regard to technology, because technology nowadays allows us, and allows the vendors, to deliver it the way it should be.

We want Facebook. We want to Tweet. We want an Amazon- or a Google-like behavior, because that’s what we get everywhere else. We want that in our IT tools as well, and we're starting to see that coming into our IT tools.

In the past we had rule sets, objects, and conditions towards objects, but it wasn’t really a workflow engine. Nowadays, SaaS solutions, as well as on-premise solutions, have workflow engines that can be adjusted and tailored according to the business needs.

No difference

You're relying on a best practice. An incident management process flow is an incident management process flow. There really is no difference no matter which vendor you go to; they all look the same, because they should. There is a best practice, or at least a good practice, out there. So they should look the same.

The only adjustment that customers have to make is to add the 10-20 percent that is customer-specific, with a new field or a specific approval that needs to be put in between. That can be done with minimal effort when you have a workflow engine.

Looking at this from a SaaS perspective, you want this off the shelf. You want to be able to subscribe to this on the Internet and adjust it in the evening, so when you come back the next day and go to work, it's already embedded in the production environment. That's what customers want.
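
As a loose illustration of that customer-specific 10-20 percent, here is a small Python sketch of layering one extra approval step onto an out-of-the-box workflow without rewriting it. The step names and the flow structure are invented; no vendor's actual configuration format is implied.

    # Sketch of layering a customer-specific step onto a standard workflow
    # without rewriting it. The flow and step names are illustrative only.

    standard_incident_flow = ["Logged", "Categorized", "Assigned", "Resolved", "Closed"]

    def tailor(flow, new_step, after_step):
        """Return a copy of the out-of-the-box flow with one extra step inserted."""
        tailored = list(flow)
        tailored.insert(tailored.index(after_step) + 1, new_step)
        return tailored

    # The customer-specific 10-20 percent: an extra approval before closure.
    customer_flow = tailor(standard_incident_flow, "Security Approval", after_step="Resolved")
    print(customer_flow)
    # ['Logged', 'Categorized', 'Assigned', 'Resolved', 'Security Approval', 'Closed']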

Gardner: Now if we've gotten a better UI and we're more ubiquitous with who can access ITSM and how, maybe we've also muddied the waters about that data -- having it in a single place or easily consolidated. Let's go back to Erik, given your emphasis on the data.

When we look at a new-generation ITSM solution and practice, how do we assure that the data integrity remains strong and that we don't lose control, given that we're going across tiers of devices and across cloud and SaaS implementations? How do we keep that data whole and central, and then leverage it for better outcomes?

Engstrom: The concept of service management is really built around services. If we think about ITIL and the structure of ITIL [without getting into too many acronyms], the ability to take Services, Assets, and Configuration Management information, [and to have] all of that be consistent -- it needs to be the same.

A platform needs really good bidirectional, working data integrations with things like your asset tool, your DCIM tool, your UCMDB tool -- wherever it is your data is coming from. The data needs to be a primary focus for the future.

Because we're talking about a system [UCMDB] that can not only discover things and manage computers, but what about the Internet of Things? What about cloud scenarios, where things are moving so quickly that traditional methods of managing information whether it would be a spreadsheet or even a daily automated discovery, will not support the service-management mission?

It's very important, first of all, that all of the data be represented. Historically, we’ve not been able to do that because of performance. We've not been able to do that because of complexities. So that’s the implementation gap that we focus on, dropping in and making all of the stuff work seamlessly.

Same information

The benefit to that is that you’re operating as an organization on the same piece of information, no matter how it’s consumed or where it’s consumed. Your asset management folks would open their HP IT Asset Manager, see the same information that is shown downstream at Service Manager. When you model an application or service, it’s the same information, the same CI managed with UCMDB, that keeps the entire organization accountable. You can see the entire workflow through it.

If you have the ability to bridge data, with multiple tools taking the best of that information and making it an inherent, automated part of service management, you can do things like Incident and Change, and Service Asset and Configuration Management (SACM), roll up the costs of these tickets, and really get to the core of being efficient in service management.
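
Here is a rough Python sketch of what "the same CI everywhere" means in practice: every tool keeps its own view, but all of them key off one shared CI identifier, so a ticket automatically picks up asset and ownership context. Tool names, fields, and data are placeholders, not real product APIs.

    # Sketch: one shared CI identifier keeps asset, discovery, and service desk
    # views consistent. Names, fields, and data are placeholders, not real APIs.

    cmdb = {  # source of record for configuration items
        "CI-2001": {"name": "online-banking-app01", "owner": "Retail Banking", "status": "In Service"},
    }

    asset_view = {"CI-2001": {"cost_center": "CC-114", "warranty_end": "2016-03-01"}}
    ticket_view = {}

    def open_incident(ci_id, summary):
        """An incident references the CI by ID, so asset and CMDB context come along for free."""
        ci = cmdb[ci_id]
        ticket = {
            "ci_id": ci_id,
            "summary": summary,
            "service_owner": ci["owner"],
            "cost_center": asset_view[ci_id]["cost_center"],
        }
        ticket_view[ci_id] = ticket
        return ticket

    print(open_incident("CI-2001", "Slow response on login"))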

Gardner: John Stagaman, if we have rapid ITSM and multiple-device ease of interface, and we also now have more of this drive toward common data shared across these different systems, it seems to me that that leads to even greater paybacks. Perhaps it's in the form of security. Perhaps it's in a policy-driven approach to service management and service delivery.

Any thoughts about ancillary or future benefits you get when you do ITSM well and then you have that quality of data in mind that is extended and kept consistent across these different approaches?

Stagaman: Part of it comes to the central role of CMDB and the universality of that data. CMDB drives asset management. It can drive ITSM and the ability to start defining models and standards and compare your live infrastructure to those models for compliance along with discovery.

The ability to know what’s connected to your network can identify failure points and chokepoints or risks of failure in that infrastructure. Rather than being reactive, "Oh, this node went down. We have to address this," you can start anticipating potential failures and build redundancy. Your possibility of outage can be significantly reduced, and you can build that CMDB and build the intelligence in, so that you can simulate what would happen if these nodes or these components went down. What's the impact of that?

You can see that when you go to make a change. That level of integration with CMDB data lets you see that if we have a change, and an outage for these servers, what the impact on the end user is, due to the cascading effect of those outages through the related devices and services. You can then say: if we bring this down, we're good -- but at the same time we have another change modifying this, and with those two coming down together we may interrupt service to online banking, so we need to schedule them at different times.

The latest update we're seeing is the ability to put really strict controls on the fact that this change will potentially impact this system or service, based on business rules that say this service can only be down during these times, or may not be down at that time. We can even identify that time-period conflict in an automated way and require additional process approvals for the change to go forward at that time, or require a reschedule.
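
A small Python sketch of the kind of automated check being described -- comparing a proposed change window against a service's allowed-outage rule and against other scheduled changes on the same service. The rules and data are invented for illustration.

    from datetime import datetime

    # Sketch: flag a change whose window violates a service's allowed-outage rule
    # or collides with another scheduled change on the same service. Data is invented.

    allowed_outage = {"online-banking": (2, 4)}   # service may only be down 02:00-04:00

    scheduled_changes = [
        {"id": "CHG-101", "service": "online-banking",
         "start": datetime(2014, 11, 2, 2, 0), "end": datetime(2014, 11, 2, 3, 0)},
    ]

    def conflicts(change, existing, windows):
        issues = []
        lo, hi = windows[change["service"]]
        if not (lo <= change["start"].hour and change["end"].hour <= hi):
            issues.append("outside allowed outage window")
        for other in existing:
            if (other["service"] == change["service"]
                    and change["start"] < other["end"]
                    and other["start"] < change["end"]):
                issues.append("overlaps " + other["id"])
        return issues

    proposed = {"id": "CHG-102", "service": "online-banking",
                "start": datetime(2014, 11, 2, 2, 30), "end": datetime(2014, 11, 2, 5, 0)}
    print(conflicts(proposed, scheduled_changes, allowed_outage))
    # ['outside allowed outage window', 'overlaps CHG-101']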

Gardner: Philipp, any thoughts on this notion of predictive benefits from a good ITSM and good data, and perhaps even this notion of an algorithmic approach to services, delivery, and management?

Federation approach

Koch: It actually fits nicely with one of our reference installations, where we have that integration Erik also talked about -- having the data and utilizing it in a kind of on-the-fly federation approach. You can no longer wait for a daily batch job to run. You need to have it at your fingertips. I can take an example from an Active Directory integration, where we utilized the data from Active Directory to allocate roles, rights, and access inside HP Service Manager.

We've made a high-level analysis of how much we actually save by doing this. By doing that integration and utilizing that information, we estimate an 80 percent reduction in manual labor done inside Service Manager for user administration.

Instead of having a technician go into Service Manager to allocate the role, or allocate rights, to a new employee who needs access to HP Service Manager, you actually get it automatically from Active Directory when the user logs in. The only thing that has to be done is for HR to say where this user sits, and that happens no matter what.

We've drastically reduced the amount of time spent there. There's a tangible angle there, where you can save a lot of time and a lot of money, mainly with regards to human effort.
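
For illustration, here is a minimal Python sketch of the idea: derive service-desk roles from directory group membership at login, so nobody grants them by hand. The group and role names are made up, and this is not the actual Active Directory or HP Service Manager API.

    # Sketch: map directory group membership to service-desk roles at login time,
    # so nobody has to grant them by hand. Names are invented, not a real API.

    GROUP_TO_ROLE = {
        "SD-Operators": "incident.operator",
        "Change-Managers": "change.approver",
        "Service-Desk-Admins": "system.admin",
    }

    def roles_for(ad_groups):
        """Return the roles a user should hold, based purely on directory groups."""
        return sorted({GROUP_TO_ROLE[g] for g in ad_groups if g in GROUP_TO_ROLE})

    # Simulated login: the groups come from the directory, the roles are derived.
    print(roles_for(["Change-Managers", "All-Staff", "SD-Operators"]))
    # ['change.approver', 'incident.operator']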

The second angle that you touched on is smart analytics, as we can call it as well, in the new solutions that we now have. It's cool to see, and we now need to see where it's going in the future and see how much further we can go with this. We can do smart analytics on utilizing all the data of the solutions. So you're using the buzzword big data.

If we go in and analyze everything that's happening in the change-management area, we now have KPIs that can tell me -- this is an old KPI as such -- that 48 percent of your change records have an element of automation inside the change execution. You have a KPI for how much you're automating in change management.

With smart analytics on top of that, you can get feedback in your KPI dashboard that says you have 48 percent. That's nice, but below that you see that if you enhance those two change models as well and automate them, you'll get an additional 10 percent of automation on your KPI.

With big-data analytics, you'll be able to see that a manual change model is used often and could easily be automated. That kind of analytics is so underutilized -- focusing on the areas that really make a difference and showing them on a dashboard for a change manager, or whoever is responsible for the process.

That really jumps out at you and says, "Well, if I spend half an hour here making this change model better, I'm going to save a lot more time, because I'm automating 10 percent more." That is extremely powerful. Now extrapolate that to the rest of the processes -- that's the future.
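
Here is a rough Python sketch of the arithmetic behind that dashboard: compute the change-automation KPI, then rank the manual change models by how much automating each would lift it. All figures are invented; the 48 percent simply mirrors the example above.

    # Sketch: compute the change-automation KPI and rank manual change models
    # by the KPI lift automating each one would give. All figures are invented.

    change_records = [
        {"model": "Patch standard server",      "automated": True,  "count": 300},
        {"model": "Add VM to cluster",          "automated": True,  "count": 180},
        {"model": "Decommission legacy server", "automated": False, "count": 350},
        {"model": "Reset service account",      "automated": False, "count": 100},
        {"model": "Firewall rule update",       "automated": False, "count": 70},
    ]

    total = sum(r["count"] for r in change_records)
    automated = sum(r["count"] for r in change_records if r["automated"])
    print("Automation KPI: %.0f%%" % (100.0 * automated / total))   # 48%

    # Which manual models are worth automating first?
    candidates = sorted(
        (r for r in change_records if not r["automated"]),
        key=lambda r: r["count"], reverse=True)
    for r in candidates:
        lift = 100.0 * r["count"] / total
        print("%-28s +%.0f%% KPI if automated" % (r["model"], lift))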

Gardner: Well Erik, we've heard both John and Philipp describe intelligent ITSM. Do you have any examples where some of your customers are also exploring this new level of benefit?

Success story

Engstrom: Absolutely. Health Shared Services British Columbia (HSSBC) will be releasing a success story through HP shortly, probably in the next few weeks. In that case, it was a five-week implementation where we dropped in our packages for Asset Management (ITAM), Service Management (ITSM), and Executive Scorecard, which are all HP products.

We even used Business Service Management (BSM), but the thinking behind this was that this is a service-management project. It’s all about uniting different health agencies in British Columbia under one shared service.

The configuration information is there. The asset information is there, right down to purchase orders, maintenance contracts, all of the parties, all of the organizations. The customer was able to identify all of their business services. This was all built in, normalized in CMDB, and then pushed into ITSM.

With this capability, they're able to see across these various organizations that roll-up in the shared service, who the parties are, because people opening tickets don’t work with those folks. They're in different organizations. They don’t have relevant information about what services are impacted. They don't have relevant information about who is the actual cost center or their budget. All that kind of stuff that becomes important in a shared service.

From week six, their go-live day, this customer had the ability to see what is allocated in assets, what is allocated in terms of maintenance and support, and which service the ticket, incident, or change is being created against.

They understood the impact for the organization as a result of having what we call a Configuration Management System (CMS), having all of these things working together. So it is possible. It gives you very high-level control, particularly when you put it into something like Executive Scorecard, to see where things are taking longer, how they're taking longer, and what's costing more.

More importantly, in a highly virtual environment, they can see whether they're oversubscribed, whether they have their budgeted amount of ESX servers, or whether they have the right number of assets that are playing a part in service delivery. They can see the cost of every task, because it's tied to a person, a business service, and an organization.

They started with a capability to do SACM, and this is what this case is really about. It plays into everything that we've talked about in this call. It's agile and it is out-of-the-box. They're using features from all of these tools that are out-of-the-box, and they're using a solution to help them implement faster.

They can see what we call “total efficiency of cost.” What am I spending, but really how is it being spent and how efficient is it? They can see across the whole lifecycle of service management. It’s beautiful.

Future trends

Gardner: It's impressive. What is it about the future trends that we can now see, or have a good sense of how they will unfold, that makes rapid ITSM adoption, this common data, and this intelligent ITSM approach all so important?

I'm thinking perhaps the addition of mobile tier and extensibility out through new networks. I'm thinking about DevOps and trying to coordinate a rapid-development approach with operations and making that seamless.

We're hearing a lot about containers these days as well. I'm also thinking about hybrid cloud, where there's a mixture of services, a mixture of hosting options, and not just static but dynamic, moving across these boundaries.

So, let's go down the list, as this would be our last question for today. John Stagaman, what is it about some of these future trends that will make ITSM even more impactful, even more important?

Stagaman: One of the big shifts that we're starting to see in self-service is the idea that you want to enable a customer to resolve their own issue in as many cases as possible. What you can see in the newest release of that product is the ability for them to search for a solution and start a chat.

When they ask a question, it can check your entire knowledge base and history to see the proposed solutions. If those don't resolve it, it can ask for additional information and then initiate a chat with the service desk, if needed.

Very often, if they say they're unable to open this file or their headset is broken, someone can immediately tell them how to procure a replacement headset. It allows that person to complete that activity or resolve their issue in a guided way. It doesn't require them to walk through a level of menus to find what they need. And it makes it much more approachable than finding a headset on the procurement system.
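
A simple Python sketch of that guided self-service flow -- search the knowledge base first and only start a chat with the service desk when nothing matches. The articles and the matching rule are simplistic placeholders, not a real knowledge-management API.

    # Sketch of guided self-service: try knowledge first, escalate to chat only
    # when nothing matches. The articles and matching rule are placeholders.

    knowledge_base = [
        {"title": "Replace a broken headset", "keywords": {"headset", "audio", "broken"},
         "answer": "Order a replacement headset from the standard catalog item."},
        {"title": "Cannot open a file",       "keywords": {"file", "open", "locked"},
         "answer": "Check whether the file is locked by another user, then retry."},
    ]

    def self_service(question):
        words = set(question.lower().split())
        for article in knowledge_base:
            if words & article["keywords"]:
                return "Suggested solution: " + article["answer"]
        return "No article found -- starting a chat with the service desk."

    print(self_service("my headset is broken"))
    print(self_service("vpn will not connect"))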

The other thing that we're seeing is the ability to bridge between an on-premises system and a SaaS solution. We have some customers for whom certain data is required to be onsite for compliance or policy reasons. They need an on-premises system, but they may have some business units that want to use a SaaS solution.

Then, when they have a system supported by central IT, the SaaS system can exchange a given case with the primary system and have bidirectional updates. So we're getting the ability to link the SaaS world and the on-premises world more effectively.

Gardner: Philipp, thoughts from you on future trends that are driving the need for ITSM that will make it even more valuable, make it more important.

Connected intelligence

Koch: Definitely. Just to add on to what John said, it goes in the direction of connected intelligence, utilizing that big-data example we just went through. It all points toward a solution that is connected across systems and brings intelligence back to the end user, just as much as to the operator, through that integration.

Another angle, more from the technology side, is that now, with the SaaS offerings that we have today, the new way of going forward as I see it happening -- and the way I think HP has made a good decision with HP Service Anywhere -- is the continuous delivery. You're losing the aspects of having version numbers for software. You no longer need to do big upgrades to move from version 9 to a version 10, because you are doing continuous delivery.

Every time new code is ready to be deployed, it is actually deployed. You do not wait and bundle it up in a yearly cycle to give a huge package that means months of upgrading. You're doing this on the fly. So Service Anywhere or Agile Manager are good examples where HP is applying that. That is the future, because the customer doesn’t want to do upgrade projects anymore. Upgrades are of the past, if we really want to believe that. We hope we can actually go there.

You touched on mobile. Mobile and bring your own device were buzzwords -- now it's already here. We don’t really need to talk about it anymore, because it already exists. That’s now the standard. You have to do this, otherwise you're not really a player in the market.

To close off with a paradigm statement: future solutions need to be implemented -- and we consultants need to deliver solutions -- that solve end-user problems, compared to what we did in the past, where we deployed solutions to manage tickets.

We're no longer in the business of helping them and giving them features to more easily manage tickets and save money on quicker resolution. That is of the past. What we need to do today is make it possible for organizations to empower end users to solve their problems themselves, to become a ticket-less IT -- this is the ideal world, of course -- where we reduce the cost of an IT organization by giving as much as possible back to the end user and enabling self-service.

Gardner: Last word to you, Erik. Any thoughts about future trends to drive ITSM and why it will be even more important to do it fast and do it well?

Engstrom: Absolutely. And in my worldview it's SACM. It's essentially using vendor strengths, the portfolio, the entire portfolio, such as HP’s Service and Portfolio Management (SPM), where you have all of these combined silos that normally operate completely independently of each other.

There are a couple of truths in IT: data is expensive to re-create, knowledge has value, and so does a tool. The next step in the new style of IT is going to require that these tools work together as one suite, one offering, so that your best data is coming from the best source and being used to make the best decisions.

Actionable information

It's about making big data a reality. But in the use of UCMDB and the HP portfolio, the data is very small; it's actionable information, because it comes from a set of tools working together. This whole portfolio helps customers save money, be more efficient with where they spend, and do more with "yes."

So the idea that you have all of this data out there, what can it mean? It can mean, for example, that you can look and see that a business service is spending 90 percent more on licensing or ESX servers or hardware, anything that it might need. You have transparency across the board.

Smarter service management means doing more with the information you already have and making informed decisions that really help you drive efficiencies. It's doing more with "yes," and being efficient. To me, that's SACM. The requirement for a portfolio, no matter how small or large it is, is that it must provide the ways for this data to be shared, so that information becomes intelligence.

Organizations that have these tools will beat the competition at an SG&A (selling, general, and administrative) level. They will wipe them out, because they're so efficient and so informed. Waste is reduced. Time is faster. Good decisions are made ahead of time. You have the data and you can act appropriately. That's the future. That's why we support HP software, because of the strength of the portfolio.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  BriefingsDirect  Dana Gardner  Erik Engstrom  Fast ITSM  HP  Interarbor Solutions  ITSM  John Stagaman  Philipp Koch 

Journey to SAP quality — Home Trust builds center of excellence with HP ALM tools

Posted By Dana L Gardner, Wednesday, October 15, 2014

The next BriefingsDirect deep-dive IT operations case study interview details how Home Trust Company in Toronto has created a center of excellence to improve quality assurance for top performance of their critical SAP applications.

How do you properly structure your testing assets in quality control in a way that makes sense for SAP? What's your proper defect flow? How do you design a configuration from the toolset that fits all? And where does automation best come into play?

These are some of the essential questions to answer for not only making apps perform well, but to allow for rapid deployment and refinement of new applications, as well as enhance ongoing security and compliance for both systems and data.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about building a center of excellence for business applications, BriefingsDirect sat down at the recent HP Discover 2014 Conference in Las Vegas with Cindy Shen, SAP QA Manager at Home Trust. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Shen: Home Trust is one of the leading trust companies in Toronto, Canada. There are two main businesses we deal with. The first bucket is mortgages; we deal with a lot of residential mortgages.

Shen

The other bucket is that we're a deposit-taking institution. People deposit their money with us, and they can invest in a registered retirement savings plan (RRSP), along with other options for their investment. An RRSP is the equivalent of the US 401(k) plan.

We're also Canada Deposit Insurance Corporation (CDIC)-compliant. If a customer has money with us and if anything happens with the company, the customer can get back up to a certain amount of money.

We're regulated under the Office of the Superintendent of Financial Institutions (OSFI), and they regulate the Banks and Trust Companies, including us.

Some of the hurdles

Gardner: So obviously it's important for you to have your applications running properly. There's a lot of auditing and a lot of oversight. Tell us what some of the hurdles were, some of the challenges you had as you began to improve your quality-assurance efforts.

Shen: We're primarily an SAP shop. I was an SAP consultant for a couple of years. I've worked in North America, Europe, and Asia. I’ve been through many industries, not just the financial industry. I've touched on consumer packaged goods SAP projects, retail SAP projects, manufacturing SAP projects, and banking SAP projects. I usually deal with global projects, 100 million-plus, and 100-300 people.

What I noticed is that, regardless of the industry or the functional solution a project has, there's always a common set of QA challenges when it comes to SAP testing, and it's very complicated. It took me a couple of years to figure out the tools, where each tool fits into the whole picture, and how the pieces fit together.

For example, some of the common challenges that I'm going to talk about in my session (here at HP Discover) are, first of all, what tools you should be using. HP ALM, the test management tool, is, in my opinion, the market leader. It's what pretty much all the Fortune 500 companies, and even smaller companies, primarily use as their test management tool. But testing SAP is unique.

What are the additional tools on the SAP side that you need in order to integrate back to the ALM test suite and have that system record of development, plus the system record of testing, all integrated together in a flow that makes sense for SAP applications? That's unique.

One is toolset and the other one is methodology. If you parachute me into any project, however large or small, complex or simple, local or global, I can guarantee you that the standards are not clear, or there is no standard in place.

For example, how do you properly write a test case to test SAP? You have to go into granular detail -- the action words you use for different application areas -- so that you can enable automation easily in the future. How do you parameterize?

What's the appropriate level of parameterization to enable that flexibility for automation? What's the naming convention for your input and output parameters, so that data flows through from the very first test case all the way to the end when you test the application end to end?
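
To illustrate the point, here is a small Python sketch of keyword-driven test steps with named input and output parameters, so one test case's outputs can feed the next in an end-to-end run. The action words, parameter names, and the "@" convention are invented, not a specific BPT library.

    # Sketch: keyword-driven test steps with named input/output parameters,
    # so one test case's outputs can feed the next in an end-to-end flow.
    # The action words and parameter names are invented for illustration.

    def create_order(context, customer_id):
        context["OUT_OrderNumber"] = "ORD-" + customer_id[-3:]   # stand-in for real SAP steps

    def post_invoice(context, order_number):
        context["OUT_InvoiceNumber"] = order_number.replace("ORD", "INV")

    ACTIONS = {"CreateOrder": create_order, "PostInvoice": post_invoice}

    end_to_end = [
        {"action": "CreateOrder", "inputs": {"customer_id": "CUST-042"}},
        {"action": "PostInvoice", "inputs": {"order_number": "@OUT_OrderNumber"}},
    ]

    def run(steps):
        context = {}
        for step in steps:
            # '@' marks a parameter fed from an earlier step's output, by naming convention
            kwargs = {k: (context[v[1:]] if str(v).startswith("@") else v)
                      for k, v in step["inputs"].items()}
            ACTIONS[step["action"]](context, **kwargs)
        return context

    print(run(end_to_end))   # {'OUT_OrderNumber': 'ORD-042', 'OUT_InvoiceNumber': 'INV-042'}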

Most errors and defects happen in the integration area. So, how do you make sure your test coverage covers all your key integration points? SAP is very complex. If you change one thing, I can guarantee you that there's something else in some other areas of the application or in the interface that’s going to change without your knowing it, and that’s going to cause problems for you sooner or later.

So, how do you have those standards and methodology consistently enforced, so that every person who's writing test cases or executing tests does it with the same quality and in the same format? That way you can generate the same reports across all different projects for executive oversight, and minimize the duplicate work you have to do on the manual test cases in order to automate in the future.

Testing assets

The other big part is how to maintain those testing assets so they're repeatable, reusable, and flexible -- so that you can shorten your project delivery time in the future through automation and consistent test-case writing in manual testing, accelerate new projects as they come up, and also improve your quality in post-production support so you can catch critical errors fast.

Those are all very common SAP testing QA themes, challenges, or problems that practitioners like me see in any SAP environment.

Gardner: So when you arrived at Home Trust, and you understood this unique situation, and how important SAP applications are, what did you do to create a center of excellence and an ability to solve these issues?

Shen: I was fortunate to have been the lead on the SAP area for a lot of global projects. I've seen the worst of it. I've also seen a fraction of the clients that actually do it much better than other companies. So, I'm fortunate to know the best practices I want to implement, what will work, and what won't work, what are the critical things you have to get in place in the beginning, and what are the pieces you can wait for down the road.

Coming from an SAP background, I'm fortunate to have that knowledge. So, from the start, I had a very clear vision as to how I wanted to drive this. First, you need to conduct an analysis of the current state, and what I saw was very common in the industry as well.

When I started, there were only two people in the QA space. It was a brand new group. And there was an overall software development lifecycle (SDLC) methodology in the company, but the company had just gone live with its SAP application. So it was basically a great opportunity to set up a methodology, because it was a green field. That was very exciting.

One of the things you have to have is an overarching methodology. Are you using Business Process Testing (BPT), or are you using some other methodology? We also had to comply with, or fit in with, SAP's methodology, ASAP, which is primarily the industry standard in the SAP space as well. So, we had to assess the current status and come up with a methodology that made sense for Home Trust Company.

Two, you have to get all the right tools in place. Home Trust is very good at getting the industry-leading toolsets. When I joined, they already had HP QC. At that time, it was called QC; now it's ALM. Solution Manager was part of the SAP purchase, so it was free. We just had to configure and implement it.

We also had QTP, which now is called UFT, and we also had LoadRunner. All the right toolsets were already in place. So I didn't have to go through the hassle of procuring all those tools.

Assessing the landscape

When we assessed the landscape of tools, we realized that, like any other company, they were not maximizing the return on investment (ROI) on the toolsets. The toolsets were not leveraged as much, because in a typical SAP environment, the demand of time to market is very high for project delivery and new product introduction.

When you have a new product, you have to configure the system fast, so it’s not too late to bring the product to the market. You have a lot of time pressure. You also have resource constraints, just like any other company. We started with two people, and we didn’t have a dedicated testing team. That was also something we felt we had to resolve.

We had to tackle it from a methodology and a toolset perspective, and we had to tackle it from a personnel perspective, how to properly structure the team and ramp the resource up. We had to tackle it through those three perspectives. Then, after all the strategic things are in place, you figure out your execution pieces.

From a methodology perspective, what are the authoring standards, what are the action words, and what are the naming conventions? I can't emphasize this enough, because I see it done so differently on each project. People don't know the implications down the road.

How do you properly structure your testing assets in QC in a way that makes sense for SAP? That is a key area. You can't structure them at too high a level -- that would mean a mega scenario with everything in one test case, or just a few test cases. Something in the application will change, which I can guarantee you, because you have to redevelop it or modify it for another feature.

If you structure your testing assets at such a high level, you have to rewrite every single asset. You don’t know where it’s changing something somewhere else, because you probably hard-coded everything.

If you put it at too granular a level, maintenance becomes a nightmare. It really has to be at the right level to enable flexibility and get ready for automation. It also has to be easy to maintain, because maintenance is usually a higher cost than the initial creation. So, those are all the standards we're setting up.
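
Here is a loose Python sketch of that granularity point: reusable components composed into scenarios, instead of one monolithic script that has to be rewritten whenever anything changes. Component and scenario names are illustrative only.

    # Sketch: build scenarios from reusable components instead of one mega script.
    # If "Login" changes, you fix one component, not every scenario. Names are illustrative.

    def login(data):            return "logged in as %s" % data["user"]
    def create_customer(data):  return "customer %s created" % data["customer"]
    def create_order(data):     return "order placed for %s" % data["customer"]

    COMPONENTS = {"Login": login, "CreateCustomer": create_customer, "CreateOrder": create_order}

    SCENARIOS = {
        "New customer end-to-end": ["Login", "CreateCustomer", "CreateOrder"],
        "Repeat order":            ["Login", "CreateOrder"],
    }

    def run_scenario(name, data):
        return [COMPONENTS[step](data) for step in SCENARIOS[name]]

    print(run_scenario("Repeat order", {"user": "qa01", "customer": "ACME"}))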

What’s your proper defect flow? It's different from company to company. You have to figure out the minimum effort required, but what makes sense. You also have to have the right control in place for this company. You have to figure out naming conventions, the relevant test cases, and all that. That's the methodology part of it.

The toolset is a lot more technical. If you're talking about the HP ALM suite, what's the standard configuration you need to enable for all your projects? I can guarantee you that every company has concurrent projects going on, even after going into production.

Even when they're implementing their initial SAP system, there are many concurrent streams going on at the same time. How do you make sure the configuration accommodates all the different types of projects with the same set of configuration? This is a key point: you cannot, let me repeat, you cannot have very different configurations for HP ALM across different projects.

Sharing assets

Very different configurations will prevent you from sharing test assets across projects, prevent you from automating them in the same manner or automating them in the near future, and prevent you from delivering projects consistently, with consistent quality and a consistent reporting format across the company. That would generate nightmares for maintenance and for keeping standards in place. That's key. I can't emphasize it enough.

So from the toolset, how do you design a configuration that fits all? That’s the mandate. The rule of thumb is do not customize. Use out-of-box functionality. Do not code. If you really have to write a query, minimize it.

The good thing about HP ALM is that it's flexible enough to accommodate all the critical requests. If you find you have to write something for it or you have to have a custom field or custom label, you probably should consider changing your process first, because ALM is a pretty mature toolset.

I've been on very complex global projects in different countries. HP ALM is able to accommodate all the key metrics, all the key deliverables you're looking to deliver. It has the capacity.

When I see other companies that do a lot of customization, it's because their process isn't correct. They're fixing the tool to accommodate processes that don't make sense. People really have to keep an open mind and seek out the best practice and expertise in the industry to understand what out-of-the-box functionality to configure in HP ALM to manage their SAP projects, instead of weakening the tool to fit how they do SAP projects.

Sometimes, it involves a lot of change management, and for any company, that’s hard. You really have to keep that open mind, stick with the best practice, and think hard about whether your process makes sense or whether you really need to tweak the tool.

Gardner: It's fascinating that in doing due diligence on process, methodology, leveraging the tools, and recognizing the unique characteristics of this particular application set, if you do that correctly, you're going to improve the quality of that particular roll out or application delivery into production, and whatever modifications you need to do over time.

It's also going to set you up to be in a much better position to modernize and be aggressive with those applications, whether it's delivering them out to a mobile tier, for example, or whether there’s different integrations with different data. So when you do this well, there are multiple levels of payback. Right?

Shen: I love this question, because this is really the million-dollar view, or the million dollar understanding, that anybody can take away from this podcast or my session (at HP Discover). This is the million dollar vision that you should seriously consider and understand.

From an SAP and HP ALM perspective and the Center for Excellence, the vision is this (I'm going to go slowly, so you get all the components and all the pieces):

Work closely

SAP and HP work very closely. So your account rep will help you greatly in the toolsets in that area. It starts with Solution Manager from SAP, which should be your system record of development. The best part is when you implement SAP, you use Solution Manager to input all your Business Process Hierarchy (BPH). BPH is your key ingredient in Solution Manager that lays out all the processes in your environment.

Tied with it you should input all the transaction codes (T-codes). The DNA of SAP is T-codes. If you go to any place in SAP, most likely you have to enter a T-code. That will bring you to the right area. When we scope out an SAP project, the key starts with the list of T-codes. The key is to build out that BPH in SAP and associate all the T-codes in different areas.

With each T-code, you have the functional specification, the technical specification, and all of the documentation and mapping associated at each level of your BPH. Not only that, you should have all your security IDs and metrics associated with each level of the BPH and the T-codes, all the flows and requirements tied together, and of course the development -- the code.

So, your Solution Manager should be the system record of development. The best practice is to always implement your SAP initial implementation with Solution Manager. So by the time you go live, you've already done all that. That’s the first bucket.

The second bucket is the HP tool suite. We'll start with the HP ALM test management tool. It allows you to input your testing requirements, and they flow from the requirement to a test. If you're using Business Process Testing (BPT), you flow through to the component in BPT and through the test case module. Then, you flow through to the test plan and test lab, and through to the defects. Everything is well integrated and connected.

And then there is something we call an adapter. It's a Solution Manager and HP ALM adapter, and it enables Solution Manager and HP ALM to talk. You have to configure that adapter between Solution Manager and ALM. It brings your hierarchy -- your BPH in Solution Manager -- and all the related assets, including the T-codes, over to the Requirements module in HP ALM.

So if you have your Solution Manager straightened out, whatever you bring over to ALM is already your scope. It tells you which T-codes are in scope to test. By the way, in SAP it's often a headache that each T-code can do many, many things, especially if you're heavily customized.

So a T-code is not enough. You have to go down to a granular level and get the variants. What are the typical scenarios or typical testing variants it has? Then, you create those variants in Solution Manager in the BPH. They flow through to the Requirements module in HP ALM and list out all your T-codes' possible variants.

Then, based on that, you start scoping out your testing assets -- the components, test cases, or whatever you have to write. You put them in BPT or in your test case module, and then you link the requirement over, so you already have your test coverage. Then you flow through to a test case, through your execution in the test lab, through to defects, and it all ties back together.
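
A rough Python sketch of that flow: business-process nodes with their T-codes become requirements, test cases link back to them, and a coverage gap is simply a requirement with no linked test. The structures are simplified placeholders, not the actual Solution Manager/ALM adapter data model.

    # Sketch: BPH nodes with T-codes become requirements; coverage is simply
    # which requirements have a linked test. A simplified placeholder model.

    bph = [
        {"process": "Order to Cash > Create Sales Order", "tcode": "VA01"},
        {"process": "Order to Cash > Billing",            "tcode": "VF01"},
        {"process": "Procure to Pay > Create PO",         "tcode": "ME21N"},
    ]

    tests = [
        {"name": "TC_VA01_standard_order", "covers_tcode": "VA01"},
        {"name": "TC_VF01_invoice",        "covers_tcode": "VF01"},
    ]

    covered = {t["covers_tcode"] for t in tests}
    gaps = [node for node in bph if node["tcode"] not in covered]
    for node in gaps:
        print("No test coverage for %s (%s)" % (node["tcode"], node["process"]))
    # No test coverage for ME21N (Procure to Pay > Create PO)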

And where does automation come into play? That's the bucket after HP ALM. UFT today is still the primary tool people use to automate. In the SAP space, SAP actually has its own, called Test Acceleration and Optimization (TAO). That also leverages UFT as the foundation to create SAP-specific automation, but either is fine. If you already have UFT, you really could start today.

Back and forth

So, the automation comes into place. This is very interesting -- this is how it goes back and forth. For example, you've already transported something to production and you want to check whether anything slipped through the cracks. Is all the testing coverage there?

There's something called Solution Documentation Assistant. From the Solution Manager side, you can actually read EarlyWatch reports to see which T-codes are actually being used in your production system today. After something is transported over into production, you can re-run it to see what the net new T-codes in the production system are. Then, you can compare them. So there's a process.

Then you can see what the net new ones are from the BPH, flow that through to your HP QC or HP ALM, and see whether you have coverage for them. If not, there's your scope for net new manual and automated testing.
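
A minimal Python sketch of that before-and-after comparison: diff the T-codes in use before and after the transport, then subtract what the regression library already covers. The T-code lists are invented.

    # Sketch: find net-new T-codes after a transport and check them against
    # the existing regression library. The T-code lists are invented.

    used_before = {"VA01", "VF01", "MM03"}          # from the earlier usage report
    used_after  = {"VA01", "VF01", "MM03", "ME21N", "MIGO"}
    regression_library = {"VA01", "VF01", "ME21N"}  # T-codes with automated coverage

    net_new = used_after - used_before
    uncovered = net_new - regression_library
    print("Net-new T-codes:", sorted(net_new))        # ['ME21N', 'MIGO']
    print("Need new tests for:", sorted(uncovered))   # ['MIGO']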

Then, you keep building that regression and you eventually will get a library. That’s how you flow through back and forth. There is also something called Business Process Change Analyzer (BPCA). That already comes free with Solution Manager. You just have to configure it.

It allows you to load whatever you want to change in production into the buffer. So, before you actually transport the code into production, you'll be able to know what areas it impacts. It goes to the core level, so it allows you to do targeted regression as well. We talked about Solution Manager, ALM, and UFT. Then there is LoadRunner and Performance Center -- the load testing, performance testing, stress testing, and so on -- and this all goes into the same picture.

The ideal solution is that you can flow your content from Solution Manager to HP ALM, enable automation for all tests together -- along with all the performance and stress testing -- in one end-to-end flow, and build that regression library. You're able to build that technical testing library, build it in Solution Manager, and maintain them at the same time.

Gardner: So the technology is really powerful, but it's incumbent on the users to go through those steps of configuring, integrating, creating the diligence of the libraries and then building on that.

I'd like to go up to the business-level discussion. When you go to your boss's boss, can you explain to them what they're going to get as a value for having gone through this? It's one thing to do it because it's the right thing to do and it's got super efficient benefits, but that needs to translate into dollars and cents and business metrics. So what do you tell them you get at that business level when they do this properly?

Business takes notice

Shen: Very good question, because this exercise we did can be applied to any other company. It's at the level where business really takes notice. One common challenge is that when you on-board somebody, do they have the proper documentation to ramp up?

I have yet to see a company that's very good with documentation, especially with SAP. Where is the list of all the T-codes we use in production today? What are the functional specs? What are the technical specs? Where is the field map? Where are the flows? You have to have that documentation in order to ramp somebody up. What typically ends up happening is that you hire somebody and you have to pull other team members away for a few weeks to ramp the person up.

Instead of putting them on the project to deliver right away -- start writing code, start configuring SAP, or whatever -- they can't start until a few months later. How do you accelerate that process? You build everything up with Solution Manager, you build everything up in HP ALM, and you build everything up in your QTP and UFT.

That way, when the person comes in, they can go to Solution Manager and look at all the T-codes and scope, the updated business areas, and the updated functional specs, and understand what the company's application does, what the logic is, and what the configuration is. Then, the person can easily go to HP ALM and figure out the testing scenarios: how people test, how they use the application, and what the expected behavior of the application should be.

Point one is that you can really speed up the hiring process and the knowledge-transfer process for your new personnel. A more important application of this is on projects. Whether SAP or not, companies usually use very high-end products, because you have to constantly roll out new applications, new releases, and new features based on market conditions and business needs.

When a project starts, a very common challenge is the documentation of existing functionality. How can you identify what to build? If you have nothing, I can guarantee you that you'll spend a few weeks of the entire project team's time trying to figure out the current status.

Again, with the library and Solution Manager, the regression testing suite, the automated suite in HP ALM and UFT, and all of that, you can get that on day one. It's going to shorten the project time. It's going to accelerate the project time with good quality.

The other thing is that a project is so important that everything in the project is necessary. Once you actually figure out your status quo, you start building.

Testing is the most labor-intensive and painstaking process and probably one of the most expensive areas in any project delivery. How do you accelerate that? Without an existing regression library, documented test scenarios, or automated regression suites, you have to invent everything from scratch.

By the way, that involves figuring out the testing scope, writing the test cases from scratch, building all the parameters, and building all the data. That takes a lot of time. If you already have an existing library, that's going to shorten your lifecycle a lot.

So all this translates into dollar savings, plus better coverage and faster delivery, which is key for the business. By the way, when you have all this in place, you're able to catch a lot more defects before they go to production. I saw a study that said it's about 10 times more expensive to catch a defect in production. So the earlier you catch it, the better.

Security confidence

Gardner: Right, of course. It also strikes me that doing this gives you better security confidence, governance, risk, and compliance (GRC) benefits, and auditability when that kicks in. In a banking environment, of course, that's really important.

Shen: Absolutely. The HP ALM tool provides a complete audit trail for the testing side of things. Not at my current company, but on other projects, an auditor typically comes in and asks for access to HP Quality Center. They look in HP ALM at the test cases, who executed them, the recorded results, and the defects. That's what auditors look for.
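As a sketch of the kind of extract an auditor asks for, here is a hypothetical pull of recent test runs, with who executed them and the recorded status, over ALM's REST interface. As with the earlier sketch, the endpoint path, field names, and response shape are assumptions that vary by ALM version.

```python
# Hypothetical audit extract: who ran which tests, when, and with what status.
# The /runs endpoint, field names, and JSON layout are assumptions and differ
# by ALM/Quality Center version; this only illustrates the shape of the audit trail.
import requests

BASE = "https://alm.example.com/qcbin"           # placeholder ALM server
DOMAIN, PROJECT = "DEFAULT", "SAP_REGRESSION"    # placeholder domain/project

session = requests.Session()
session.get(f"{BASE}/authentication-point/authenticate",
            auth=("auditor", "secret")).raise_for_status()

resp = session.get(
    f"{BASE}/rest/domains/{DOMAIN}/projects/{PROJECT}/runs",
    params={"fields": "id,name,owner,status,execution-date", "page-size": 100},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for entity in resp.json().get("entities", []):
    fields = {f["Name"]: (f["values"][0].get("value") if f["values"] else None)
              for f in entity["Fields"]}
    print(fields.get("execution-date"), fields.get("owner"),
          fields.get("name"), fields.get("status"))
```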

Gardner: Cindy, what interests you here at HP Discover in terms of what comes next in HP's tools, seeing as they're quite important to you? Also, are you looking for anything in the HP-SAP relationship moving forward?

Shen: I love that question. Sometimes, I feel very lonely in this niche field. SAP is a big beast. HP-SAP integration is part of what they do, but it's not what they market. The good thing is that most SAP clients have HP ALM. It's a very necessary toolset for both HP and SAP to continue to evolve and support.

It's a niche market. There are only a handful of people in the world who can do this end to end properly. HP has many other products, so you're looking at a small circle of SAP end clients who use HP toolsets and need to know how to configure and run them efficiently and properly. Sometimes I feel very lonely in the overlap between the HP and SAP circles.

That's why Discover is very important to me. It feels like a homecoming, because here I can actually speak to the project managers and experts on HP ALM, Sprinter, the integration, and the HP adapter. So I know what the future releases are, I know what's coming down the line, and I know which configurations I might have to change in the future.

The other really good part, which I'm passionate about after having done enough projects, is that I've helped clients, and there's always a common set of questions and challenges. It took me a couple of years to figure these out. There are many, many people out there in the same boat I was in years back, and I love to share my experience, expertise, and knowledge with the end clients.

They're the ones managing and creating their end-to-end testing. They're the ones facing all these challenges. I love to share with them what the best practices are and how to structure things correctly, so that they don't have to suffer down the road. It really takes expertise to make it right. That's what I love to share.

As far as the HP and SAP ecosystem goes, I'd like to see them integrate more tightly. I'd also like to see them engage more with the end-user community, so that we can share the lessons and the experience with end users more.

Also, I know all the vendors in the space. The vendors in this space are very niche, and most of them come from SAP and HP backgrounds. So I keep running into people I know, my vendors keep running into people they know, and it's that community that's critical to enabling success for the end user and for the business.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Cindy Shen  Dana Gardner  Home Trust Company  HP  HP ALM  HPDiscover  Interarbor Solutions  SAP  Solution Manager 
