Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 


 


Journey to SAP quality — Home Trust builds center of excellence with HP ALM tools

Posted By Dana L Gardner, Wednesday, October 15, 2014

The next BriefingsDirect deep-dive IT operations case study interview details how Home Trust Company in Toronto has created a center of excellence to improve quality assurance for top performance of their critical SAP applications.

How do you properly structure your testing assets in quality control in a way that makes sense for SAP? What's your proper defect flow? How do you design a configuration that fits all from the toolset? And where does automation best come into play?

These are some of the essential questions to answer, not only for making apps perform well, but also for allowing rapid deployment and refinement of new applications and for enhancing ongoing security and compliance for both systems and data.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about building a center of excellence for business applications, BriefingsDirect sat down at the recent HP Discover 2014 Conference in Las Vegas with Cindy Shen, SAP QA Manager at Home Trust. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Shen: Home Trust is one of the leading trust companies in Toronto, Canada. There are two main businesses we deal with. The first bucket is mortgages. We deal with a lot of residential mortgages.


The other bucket is that we're a deposit-taking institution. People deposit their money with us, and they can invest it in a registered retirement savings plan (RRSP), which is the equivalent of the US 401(k) plan, along with other investment options.

We're also Canada Deposit Insurance Corporation (CDIC)-compliant. If a customer has money with us and anything happens to the company, the customer can get back up to a certain amount of it.

We're regulated by the Office of the Superintendent of Financial Institutions (OSFI), which regulates banks and trust companies, including us.

Some of the hurdles

Gardner: So obviously it's important for you to have your applications running properly. There's a lot of auditing and a lot of oversight. Tell us what some of the hurdles were, some of the challenges you had as you began to improve your quality-assurance efforts.

Shen: We're primarily an SAP shop. I was an SAP consultant for a couple of years. I've worked in North America, Europe, and Asia, across many industries, not just the financial industry. I've touched consumer packaged goods, retail, manufacturing, and banking SAP projects. I usually deal with global projects of 100 million-plus, with 100-300 people.

What I noticed is that, regardless of the industry or the functional solution a project has, there's always a common set of QA challenges when it comes to SAP testing, and it's very complicated. It took me a couple of years to figure out the tools, where each tool fits into the whole picture, and how the pieces fit together.

For example, one of the common challenges that I'm going to talk about in my session (here at HP Discover) is, first of all, what tools you should be using. The HP ALM test management tool is, in my opinion, the market leader. That's what pretty much all the Fortune 500 companies, and even smaller companies, use as their primary test management tool. But testing SAP is unique.

Reduce post-production issues by 80% by building better apps.  

Learn Seven Best Practices for Business-Ready Applications

with a free white paper.

What are the additional tools on the SAP side that you need in order to integrate back to the ALM test suite and have that system record of development, plus the system record of testing, all integrated together in a flow that makes sense for SAP applications? That's unique.


One is toolset and the other one is methodology. If you parachute me into any project, however large or small, complex or simple, local or global, I can guarantee you that the standards are not clear, or there is no standard in place.

For example, how do you properly write a test case to test SAP? You have to go into granular detail, specifying the action words you use for different application areas, so that automation can be enabled very easily in the future. How do you parameterize?

What's the appropriate level of parameterization to enable that flexibility for automation? What's the naming convention for your input and output parameters, so the data flows through from the very first test case all the way to the end, when you test the application end to end?
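
To make the naming-convention point concrete, here is a minimal, hypothetical sketch (in Python) of how consistent in_/out_ parameter prefixes let one step's outputs feed the next step's inputs across an end-to-end flow. The step names, T-codes, and prefix rules are illustrative assumptions, not an HP or SAP standard.

```python
# Minimal sketch (hypothetical convention): every step declares its outputs with
# an "out_" prefix; the matching "in_" parameter of a later step is filled in
# automatically, so manual test cases can later be automated without rewriting.

def run_create_order(params):
    # Stand-in for an automated SAP step (e.g., T-code VA01); returns its outputs.
    return {"out_sales_order": "SO-" + params["in_customer"]}

def run_create_delivery(params):
    # Stand-in for a follow-on step (e.g., T-code VL01N).
    return {"out_delivery": "DL-for-" + params["in_sales_order"]}

STEPS = [
    {"name": "Create sales order", "run": run_create_order},
    {"name": "Create delivery",    "run": run_create_delivery},
]

def run_end_to_end(initial_params):
    """Chain steps: each 'out_<name>' output becomes the next step's 'in_<name>' input."""
    context = dict(initial_params)
    for step in STEPS:
        result = step["run"](context)
        for key, value in result.items():
            context["in_" + key[len("out_"):]] = value   # out_x -> in_x
        print(step["name"], result)
    return context

run_end_to_end({"in_customer": "100042"})
```

With a convention like this, the manual test case and its future automated version share the same parameter names, which is exactly the flexibility Shen is describing.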

Most errors and defects happen in the integration area. So, how do you make sure your test coverage covers all your key integration points? SAP is very complex. If you change one thing, I can guarantee you that there's something else in some other areas of the application or in the interface that’s going to change without your knowing it, and that’s going to cause problems for you sooner or later.

So, how do you get those standards and that methodology consistently enforced, so that every person who writes test cases or executes testing works at the same quality and in the same format? That way you can generate the same reports across all the different projects for executive oversight, and minimize the duplicate work you have to do on the manual test cases in order to automate them in the future.
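
One way to picture that enforcement is a small lint pass over exported test-case metadata. The sketch below is purely illustrative; the naming rules, field names, and example tests are invented, not Home Trust's actual standards.

```python
import re

# Hypothetical rules: test names follow <ORG>_<MODULE>_<TCODE>_<NN>, and every
# parameter carries an in_/out_ prefix so test cases can be chained and automated.
TEST_NAME_PATTERN = re.compile(r"^[A-Z0-9]+_[A-Z0-9]+_[A-Z0-9]+_\d{2}$")

def lint_test_case(test):
    """Return a list of standards violations for one test-case record."""
    issues = []
    if not TEST_NAME_PATTERN.match(test["name"]):
        issues.append(f"{test['name']}: name does not follow the naming convention")
    for param in test.get("parameters", []):
        if not param.startswith(("in_", "out_")):
            issues.append(f"{test['name']}: parameter '{param}' lacks an in_/out_ prefix")
    return issues

tests = [
    {"name": "HT_FI_FB60_01", "parameters": ["in_vendor", "out_document_no"]},
    {"name": "misc test",     "parameters": ["vendor"]},
]

for test in tests:
    for issue in lint_test_case(test):
        print(issue)
```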

Testing assets

The other big part is how to maintain those testing assets so they're repeatable, reusable, and flexible -- so that you can shorten project delivery time in the future through automation and consistently written manual test cases, accelerate new projects as they come up, and improve quality in post-production support so you can catch critical errors fast.

Those are all very common SAP testing QA themes, challenges, or problems that practitioners like me see in any SAP environment.

Gardner: So when you arrived at Home Trust, and you understood this unique situation, and how important SAP applications are, what did you do to create a center of excellence and an ability to solve these issues?

Shen: I was fortunate to have been the lead in the SAP area for a lot of global projects. I've seen the worst of it. I've also seen the fraction of clients that actually do it much better than other companies. So I'm fortunate to know the best practices I want to implement: what will work and what won't, which critical things you have to get in place at the beginning, and which pieces can wait until down the road.

We had to assess the current status and make sure to come up with a methodology that made sense for Home Trust Company.

Coming from an SAP background, I'm fortunate to have that knowledge. So, from the start, I had a very clear vision as to how I wanted to drive this. First, you need to conduct an analysis of the current state, and what I saw was very common in the industry as well.

When I started, there were only two people in the QA space. It was a brand-new group. And there was an overall software development lifecycle (SDLC) methodology in the company, but the company had just gone live with its SAP application. So it was basically a great opportunity to set up a methodology, because it was a green field. That was very exciting.

One of the things you have to have is an overarching methodology. Are you using Business Process Testing (BPT), or are you using some other methodology? We also had to comply with, or fit in with, SAP's own methodology, ASAP, which is the industry standard in the SAP space as well. So we had to assess the current status and come up with a methodology that made sense for Home Trust Company.

Two, you had to get all the right tools in place. Home Trust is very good at getting industry-leading toolsets. When I joined, they already had HP QC. At that time it was called QC; now it's ALM. Solution Manager was part of the SAP purchase, so it was free. We just had to configure and implement it.

We also had QTP, which now is called UFT, and we also had LoadRunner. All the right toolsets were already in place. So I didn't have to go through the hassle of procuring all those tools.

Assessing the landscape

When we assessed the landscape of tools, we realized that, like any other company, they were not maximizing the return on investment (ROI) on those toolsets. The toolsets were not leveraged as much as they could be, because in a typical SAP environment the time-to-market demand is very high for project delivery and new product introduction.

When you have a new product, you have to configure the system fast, so it’s not too late to bring the product to the market. You have a lot of time pressure. You also have resource constraints, just like any other company. We started with two people, and we didn’t have a dedicated testing team. That was also something we felt we had to resolve.

We had to tackle it from a methodology and a toolset perspective, and we had to tackle it from a personnel perspective, how to properly structure the team and ramp the resource up. We had to tackle it through those three perspectives. Then, after all the strategic things are in place, you figure out your execution pieces.

From a methodology perspective, what are the authoring standards, what are the action words, and what are the naming conventions? I can't emphasize this enough, because I see it done so differently on each project. People don't know the implications down the road.


How do you properly structure your testing assets in QC in a way that makes sense for SAP? That is a key area. You can't structure at too high a level. That would mean you have a mega-scenario with everything in one test case, or in just a few test cases. And something in the application will change -- I can guarantee you it will -- because you have to redevelop it or modify it for another feature.

If you structure your testing assets at such a high level, you have to rewrite every single asset. You don't know where a change affects something else, because you've probably hard-coded everything.

If you put it at too granular a level, maintenance becomes a nightmare. It really has to be at the right level to enable flexibility and get ready for automation. It also has to be easy to maintain, because maintenance usually costs more than the initial creation. Those are all the standards we're setting up.
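
A rough sketch of that granularity trade-off: keep components small and reusable, then compose them into end-to-end scenarios, so a change is made once and picked up everywhere. The component and scenario names below are made up for illustration.

```python
# Reusable, mid-granularity components (roughly one screen or T-code step each).
COMPONENTS = {
    "logon":           ["open SAP GUI", "enter credentials"],
    "create_order":    ["T-code VA01", "enter customer", "enter items", "save"],
    "create_delivery": ["T-code VL01N", "reference order", "post goods issue"],
    "create_invoice":  ["T-code VF01", "reference delivery", "save"],
}

# Scenarios are just ordered lists of component names; editing one component
# automatically updates every scenario that uses it.
SCENARIOS = {
    "order_to_cash": ["logon", "create_order", "create_delivery", "create_invoice"],
    "order_only":    ["logon", "create_order"],
}

def expand(scenario_name):
    """Flatten a scenario into executable steps drawn from the shared components."""
    return [step
            for component in SCENARIOS[scenario_name]
            for step in COMPONENTS[component]]

print(expand("order_to_cash"))
```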

What's your proper defect flow? It's different from company to company. You have to figure out the minimum effort required, but one that makes sense. You also have to have the right controls in place for the company. You have to figure out naming conventions, the relevant test cases, and all that. That's the methodology part of it.
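
A defect flow is, at bottom, a small state machine: a set of statuses, the allowed transitions between them, and the roles permitted to make each move. The sketch below is a generic, hypothetical workflow, not HP ALM's built-in one.

```python
# Hypothetical defect workflow: (from_status, to_status) -> roles allowed to move it.
TRANSITIONS = {
    ("New", "Open"):      {"qa", "dev_lead"},
    ("Open", "Fixed"):    {"developer"},
    ("Fixed", "Retest"):  {"qa"},
    ("Retest", "Closed"): {"qa"},
    ("Retest", "Open"):   {"qa"},        # reopened when the retest fails
    ("Open", "Rejected"): {"dev_lead"},
}

def move(defect, new_status, role):
    """Apply a status change only if the workflow allows that role to make it."""
    allowed = TRANSITIONS.get((defect["status"], new_status), set())
    if role not in allowed:
        raise ValueError(f"{role} may not move defect {defect['id']} "
                         f"from {defect['status']} to {new_status}")
    defect["status"] = new_status
    return defect

defect = {"id": 101, "status": "New"}
move(defect, "Open", "qa")
move(defect, "Fixed", "developer")
print(defect)   # {'id': 101, 'status': 'Fixed'}
```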

The toolset is a lot more technical. If you're talking about the HP ALM suite, what's the standard configuration you need to enable for all your projects? I can guarantee you that every company has concurrent projects going on in post-production.

Even when they're implementing their initial SAP system, there are many concurrent streams going on at the same time. How do you make sure the configuration accommodates all the different types of projects with the same set of configuration? This is a key point: you cannot, let me repeat, cannot, have very different configurations for HP ALM across different projects.

Sharing assets

Different configurations prevent you from sharing test assets across projects, prevent you from automating them in the same manner in the near future, and prevent you from delivering projects consistently, with consistent quality and a consistent reporting format across the company. That would create nightmares for maintenance and for keeping standards in place. That's key. I can't emphasize it enough.

So from the toolset side, how do you design a configuration that fits all? That's the mandate. The rule of thumb is: do not customize. Use out-of-the-box functionality. Do not code. If you really have to write a query, minimize it.

The good thing about HP ALM is that it's flexible enough to accommodate all the critical requests. If you find you have to write something for it or you have to have a custom field or custom label, you probably should consider changing your process first, because ALM is a pretty mature toolset.


I've been on very complex global projects in different countries. HP ALM is able to accommodate all the key metrics, all the key deliverables you're looking to deliver. It has the capacity.

When I see other companies that do a lot of customization, it's because their process isn't correct. They're bending the tool to accommodate processes that don't make sense. People really have to keep an open mind and seek out the best practices and expertise in the industry to understand what out-of-the-box functionality to configure in HP ALM to manage their SAP projects, instead of weakening the tool to fit how they do SAP projects.


Sometimes, it involves a lot of change management, and for any company, that’s hard. You really have to keep that open mind, stick with the best practice, and think hard about whether your process makes sense or whether you really need to tweak the tool.

Gardner: It's fascinating that doing due diligence on process, methodology, leveraging the tools, and recognizing the unique characteristics of this particular application set improves the quality of that particular rollout or application delivery into production, and of whatever modifications you need to make over time.

It's also going to set you up to be in a much better position to modernize and be aggressive with those applications, whether it's delivering them out to a mobile tier, for example, or whether there’s different integrations with different data. So when you do this well, there are multiple levels of payback. Right?

Shen: I love this question, because this is really the million-dollar view, or the million-dollar understanding, that anybody can take away from this podcast or my session (at HP Discover). This is the million-dollar vision that you should seriously consider and understand.

From an SAP and HP ALM perspective and the Center for Excellence, the vision is this (I'm going to go slowly, so you get all the components and all the pieces):

Work closely

SAP and HP work very closely, so your account rep will help you greatly with the toolsets in that area. It starts with Solution Manager from SAP, which should be your system record of development. The best part is that when you implement SAP, you use Solution Manager to input your entire Business Process Hierarchy (BPH). The BPH is the key ingredient in Solution Manager that lays out all the processes in your environment.

Tied to it, you should input all the transaction codes (T-codes). The DNA of SAP is T-codes. If you go to any place in SAP, most likely you have to enter a T-code, and that brings you to the right area. When we scope out an SAP project, the key starts with the list of T-codes. The key is to build out that BPH in SAP and associate all the T-codes with the different areas.

With each T-code, you have the functional specification, the technical specification, and all the other documentation and mapping associated at each level of your BPH. Not only that, you should have all your security IDs and metrics associated with each level of the BPH and its T-codes, all the flows and requirements tied together, and of course the development -- the code.
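
Conceptually, the BPH is a tree in which each process step carries its T-codes, documents, and security roles, so everything stays attached to the process it belongs to. The simplified model below is illustrative only; the field names are assumptions, not Solution Manager's actual schema.

```python
# Illustrative BPH fragment: each leaf keeps its T-codes and related assets together.
bph = {
    "Order to Cash": {
        "Create Sales Order": {
            "tcodes": ["VA01"],
            "documents": ["FS_VA01_functional_spec.docx", "TS_VA01_tech_spec.docx"],
            "security_roles": ["Z_SD_ORDER_CREATE"],
        },
        "Billing": {
            "tcodes": ["VF01", "VF04"],
            "documents": ["FS_billing.docx"],
            "security_roles": ["Z_SD_BILLING"],
        },
    },
}

def all_tcodes(tree):
    """Walk the hierarchy and collect every T-code in scope."""
    found = []
    for value in tree.values():
        if isinstance(value, dict) and "tcodes" in value:
            found.extend(value["tcodes"])
        elif isinstance(value, dict):
            found.extend(all_tcodes(value))
    return found

print(all_tcodes(bph))   # ['VA01', 'VF01', 'VF04']
```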

So, your Solution Manager should be the system record of development. The best practice is to always implement your SAP initial implementation with Solution Manager. So by the time you go live, you've already done all that. That’s the first bucket.

The second bucket is the HP tool suite. We'll start with the HP ALM test management tool. It allows you to input your testing requirements, and they flow through from requirement to test. If you're using Business Process Testing (BPT), they flow through to the components in BPT and then to the test case module. From there, you flow through to the test plan and test lab, and on to defects. Everything is well integrated and connected.


And then there is something we call an adapter -- the Solution Manager and HP ALM adapter. It enables Solution Manager and HP ALM to talk. You have to configure that adapter between Solution Manager and ALM. It brings your hierarchy -- your BPH in Solution Manager -- and all the related assets, including the T-codes, over to the Requirements module in HP ALM.

So if you have your Solution Manager straightened out, whatever you bring over to ALM is already your scope. It tells you which T-codes are in scope to test. By the way, in SAP it's often a headache that each T-code can do many, many things, especially if you're heavily customized.

So a T-code is not enough. You have to go down to a granular level and get the variants. What are the typical scenarios or typical testing variants it has? Then you can create those variants in Solution Manager in the BPH. They flow through to the Requirements module in HP ALM and list out each T-code's possible variants.

Then, based on that, you start scoping out your testing assets -- the components, test cases, or whatever you have to write. You put them in BPT or in your test case module. Then you link the requirements over, so you already have your test coverage. Then you flow through a test case, flow through your execution in the test lab, flow through to defects, and it all ties back together.
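
That requirement-to-test linkage amounts to a coverage map, and the gaps in the map define the scope for new test cases. A minimal sketch of the gap check, with invented requirement and test names (this is the underlying idea, not the ALM API):

```python
# Requirements imported from the BPH, hypothetically one per T-code variant.
requirements = {"VA01_standard", "VA01_rush_order", "VF01_standard", "VL01N_standard"}

# Tests currently linked to requirements in the test plan.
coverage = {
    "HT_SD_VA01_01": {"VA01_standard"},
    "HT_SD_VF01_01": {"VF01_standard"},
}

covered = set().union(*coverage.values())
uncovered = requirements - covered
print("Requirements with no linked test:", sorted(uncovered))
# -> ['VA01_rush_order', 'VL01N_standard']: the scope for new test cases
```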

And where does automation come into play? That's the bucket after HP ALM. UFT today is still the primary tool people use to automate. In the SAP space, SAP actually has its own, called Test Acceleration and Optimization (TAO), which also leverages UFT. That's the foundation for creating SAP-specific automation, but either is fine. If you already have UFT, you really could start today.

Back and forth

So that's where automation comes into place, and this is how it goes back and forth. For example, you've already transported something to production and you want to check whether anything slipped through the cracks. Is all the testing coverage there?

There's something called Solution Documentation Assistant. From the Solution Manager side, you can read EarlyWatch reports to see which T-codes are actually being used in your production system today. After something is transported over into production, you can rerun it to see which T-codes are net new in the production system. Then you can compare the two. So there's a process.

Then you can see which ones are net new from the BPH, flow that through to your HP QC or HP ALM, and see whether you have coverage for them. If not, there's your scope for net-new manual and automated testing.
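
That comparison is essentially a set difference: the T-codes used in production after the transport, minus what was used before, and minus what the regression library already covers. A toy sketch with made-up T-codes:

```python
# T-codes observed in production usage before and after the latest transports
# (in practice these would come from usage reports, not be hard-coded).
used_before = {"VA01", "VF01", "FB60"}
used_after  = {"VA01", "VF01", "FB60", "ME21N", "MIGO"}

# T-codes the existing regression library already covers.
regression_covered = {"VA01", "VF01", "FB60", "ME21N"}

net_new = used_after - used_before
gap = used_after - regression_covered

print("Net-new T-codes in production:", sorted(net_new))     # ['ME21N', 'MIGO']
print("Scope for new manual/automated tests:", sorted(gap))  # ['MIGO']
```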


Then you keep building that regression suite, and you eventually get a library. That's how you flow back and forth. There is also something called Business Process Change Analyzer (BPCA). That already comes free with Solution Manager; you just have to configure it.

It allows you to load whatever you want to change in production into a buffer. So, before you actually transport the code into production, you'll be able to know which areas it impacts. It goes down to the object level, so it allows you to do targeted regression as well. We've talked about Solution Manager, ALM, and UFT. Then there is LoadRunner and Performance Center -- load testing, performance testing, stress testing, and so on -- and this all goes into the same picture.
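
Targeted regression of the kind BPCA enables boils down to intersecting the objects a pending change touches with the objects each regression test exercises. The selection logic below is a hypothetical sketch of that idea, not BPCA itself.

```python
# Hypothetical mapping of regression tests to the SAP objects they exercise.
test_objects = {
    "HT_SD_VA01_01":  {"VA01", "Z_PRICING_ROUTINE"},
    "HT_FI_FB60_01":  {"FB60"},
    "HT_MM_ME21N_01": {"ME21N", "Z_RELEASE_STRATEGY"},
}

# Objects touched by the change waiting in the transport buffer.
changed_objects = {"Z_PRICING_ROUTINE"}

targeted = [name for name, objects in test_objects.items() if objects & changed_objects]
print("Run before transporting to production:", targeted)
# -> ['HT_SD_VA01_01']
```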

The ideal solution is that you can flow your content from Solution Manager to HP ALM, enable automation for all tests together -- functional, performance, stress, and so on -- in one end-to-end flow, and build that regression library and technical testing library while maintaining them in Solution Manager at the same time.

Gardner: So the technology is really powerful, but it's incumbent on the users to go through those steps of configuring, integrating, creating the diligence of the libraries and then building on that.

I'd like to go up to the business-level discussion. When you go to your boss's boss, can you explain to them what they're going to get as a value for having gone through this? It's one thing to do it because it's the right thing to do and it's got super efficient benefits, but that needs to translate into dollars and cents and business metrics. So what do you tell them you get at that business level when they do this properly?

Business takes notice

Shen: Very good question, because this exercise we did can be applied to any other company. It's at a level that the business really notices. One common challenge is that when you on-board somebody, do you have the proper documentation to ramp them up?

I have yet to see a company that's very good with documentation, especially with SAP. Where is the list of all the T-codes we use in production today? What are the functional specs? What are the technical specs? Where is the field map? Where are the flows? You have to have that documentation in order to ramp somebody up. Otherwise, what typically ends up happening is that you hire somebody and have to pull other team members away for a few weeks to ramp the person up.

Instead of putting them on the project to deliver right away -- start writing code, start configuring SAP, or whatever -- they can't start until a few months later. How do you accelerate that process? You build everything up in Solution Manager, you build everything up in HP ALM, and you build everything up in QTP and UFT.

That way, when a person comes in, they can go to Solution Manager and look at all the T-codes in scope, the updated T-codes and business areas, and the updated functional specs, and understand what the company's application does, what the logic is, and what the configuration is. Then the person can easily go to HP ALM and figure out the testing scenarios, how people test, how they use the application, and what the expected behavior of the application should be.

Point one is that you can really speed up the hiring process and the knowledge-transfer process for new personnel. A more important application of this is on projects. Whether SAP or not, companies usually run very high-end projects, because you have to constantly roll out new applications, new releases, and new features based on market conditions and business needs.


When a project starts, a very common challenge is documentation of the existing functionality. How can you identify what to build? If you have nothing, I can guarantee you that the entire project team will spend a few weeks trying to figure out the current status.

Again, with the library in Solution Manager, the regression testing suite, the automated suite in HP ALM and UFT, and all of that, you can get that on day one. It's going to shorten the project time. It's going to accelerate delivery with good quality.

The other thing is that on a project, time is so precious that anything that saves it is very valuable. Once you actually figure out your status quo, you can start building.

Testing is the most labor-intensive and painstaking process and probably one of the most expensive areas in any project delivery. How do you accelerate that? Without an existing regression library, documented test scenarios, or even automated existing regression libraries, you have to invent everything from scratch.

By the way, that involves figuring out the testing scope, writing the test cases from scratch, building all the parameters, and building all the data. That takes a lot of time. If you already have an existing library, that's going to shorten your lifecycle a lot.

So all of this translates into dollar savings, plus better coverage and faster delivery, which is key for the business. By the way, when you have all of this in place, you're able to catch a lot more defects before they go to production. I saw a study that said it's about 10 times more expensive to catch a defect in production. So the earlier you catch it, the better.

Security confidence

Gardner: Right, of course. It also strikes me that doing this gives you better security confidence; governance, risk, and compliance benefits; and auditability when that kicks in. In a banking environment, of course, that's really important.

Shen: Absolutely. The HP ALM tool provides a complete audit trail for the testing aspect of it. Not at this current company, but on other projects, an auditor usually comes in and asks for access to HP QC. They look at HP ALM -- the test cases, who executed them, the recorded results, and the defects. That's what auditors look for.


Gardner: Cindy, what is it that’s of interest to you here at HP Discover in terms of what comes next in HP's tool, seeing as they're quite important to you? Also, are you looking for anything in the HP-SAP relationship moving forward?

Shen: I love that question. Sometimes, I feel very lonely in this niche field. SAP is a big beast. HP-SAP integration is part of what they do, but it's not what they market. The good thing is that most SAP clients have HP ALM. It's a very necessary toolset for both HP and SAP to continue to evolve and support.

It's a niche market. There are only a handful of people in the world who can do this end to end properly. HP has many other products. So you're looking at a small circle of SAP end clients who are using HP toolsets and who need to know how to configure and run this properly and efficiently. Sometimes I feel very lonely in the overlap of the HP and SAP circles.


That's why Discover is very important to me. It feels like a homecoming, because here I can actually speak to the project managers and experts on HP ALM, Sprinter, the integration, and the HP adapter. So I know what the future releases are, I know what's coming down the line, and I know which configurations I might have to change in the future.

The other really good part, which I'm passionate about after having done enough projects, is that I've helped clients, and there's always this common set of questions and challenges. It took me a couple of years to figure these out. There are many, many people out there in the same boat I was in years back, and I love to share my experience, expertise, and knowledge with the end clients.

They're the ones managing and creating their end-to-end testing. They're the ones facing all these challenges. I love to share with them what the best practices are, how to structure things correctly, so that you don’t have to suffer down the road. It really takes expertise to make it right. That’s what I love to share.

As far as the ecosystem of HP and SAP goes, I'd like to see them integrate more tightly. I'd like to see them engage more with the end-user community, so that we can share the lessons and the experience with end users more.

Also, I know all the vendors in the space. The vendors in this space are very niche, and most of them come from SAP and HP backgrounds. So I keep running into people I know, and my vendors keep running into people they know. It's that community that's critical to enabling success for the end user and for the business.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Cindy Shen  Dana Gardner  Home Trust Company  HP  HP ALM  HPDiscover  Interarbor Solutions  SAP  Solution Manager 

 

ITSM adoption forces a streamlined IT operations culture at Desjardins, paves the way to cloud

Posted By Dana L Gardner, Thursday, October 09, 2014

Our next innovation case study interview highlights how Desjardins Group in Montréal is improving its IT operations through an advanced IT service management (ITSM) approach.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more, BriefingsDirect sat down with Trung Quach, ITSM Manager at Desjardins in Québec, at the recent HP Discover conference in Las Vegas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First, tell us a little bit about your organization. You have a large network of credit unions.

Quach: It's more like cooperative banking. We are around 50,000 people across Québec, and we've started moving into the rest of Canada and the US.

Gain better control over help desk quality and impact.  

Learn how to make your help desk more relevant

with a free white paper.

Gardner: Tell us a little bit about your IT organization, the size, how many people, how many datacenters? What sort of IT organization do you have?

Quach: We're around 2,500 and counting. We're mainly based in Montréal and Lévis, which is near Québec City. Most of them are in Montréal, but some technical people are in Lévis. 

Gardner: Tell us about your role. What are you doing there as ITSM manager?

The ITIL process

Quach: I joined Desjardins last year in the ITSM leader position. This is more about the process -- the ITIL process -- and everything that's involved with the tool, as well as supporting those overall processes.

Gardner: Tell us why ITSM has become important to you. What were some of the challenges, some of the requirements? What was the environment you were in that required you to adopt better ITSM principles?


Quach: A couple of years ago, when they merged 10-plus silos of IT into one big group, Desjardins needed to centralize the process and put best practices in place to be more efficient and competitive -- and to give higher value to the business.

Gardner: What, in particular, were issues that cropped up as a result of that decentralization? Was this poor performance, too much cost, too many manual processes, all of the above?

Quach: We had a lot of manual processes and a lot of tools. To be able to measure the performance of a team, you need to use the same process and the same tools, and then measure yourself against them. You need to optimize the way you work so that you can provide better IT services.

Gardner: What have been some of the results of your movement toward ITSM? What sort of benefits have you realized as a result?

Quach: We had many of them. Some were financial, but the most important things, I think, are service quality and the availability of those services. One indicator is a 30 percent reduction in major incidents over the last two years.

Gardner: What is it about your use of ITSM that has led to that significant reduction in incidents? How does that translate?

Quach: We put our new problem-management approach to work, along with the problem and incident processes. When we open tickets, we can take care of incidents in a coordinated way at an enterprise level, so the impact is visible everywhere. We can now advise the lines of business, follow up on the incident, and close the incident rapidly. We follow up on any problems, and then we fix the real issues so that they don't come back.
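
The coordinated handling Quach describes typically links related incidents to a single problem record, so the root cause is fixed once and every linked incident is closed together. The sketch below is a generic illustration of that pattern, not Desjardins' actual tooling.

```python
# Minimal model: many incidents, one underlying problem record.
incidents = [
    {"id": "INC-1001", "service": "online banking", "status": "open"},
    {"id": "INC-1002", "service": "online banking", "status": "open"},
]

problem = {"id": "PRB-200", "root_cause": None,
           "incidents": [incident["id"] for incident in incidents]}

def resolve_problem(problem, incidents, root_cause):
    """Record the root cause once, then close every incident linked to the problem."""
    problem["root_cause"] = root_cause
    for incident in incidents:
        if incident["id"] in problem["incidents"]:
            incident["status"] = "closed"

resolve_problem(problem, incidents, "misconfigured load balancer after a change")
print(problem["root_cause"], [incident["status"] for incident in incidents])
```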

Gardner: Have you used this to translate back to any applications development, or custom development in your organization? Or is this more on the operations side strictly?

Better support

Quach: We started all of this on the operations side, but last year we started on the development side, too. They're being brought into our process slowly, and that's going to get better soon, so we can better support the full IT lifecycle.

Gardner: Tell us about HP Discover. What's of interest to you? Have you been looking at what HP has been doing with their tools? What's of most importance to you in terms of what they do with their technology?

Quach: I can tell you how important it is for us. Last year, we didn't go to HP Discover. This year, around eight people from my team and the architecture team are here. That shows you how important it is.

Now we spread out. A lot of my team members went to explore tools and everything else that HP has to offer -- and HP has a lot to offer. We went to learn about the cloud, as well as big data. It all works together. That's why it was important for us to come here. ITSM is the main reason we're here, but I want to make sure that everything works together, because the IT processes touch everything.


Gardner: I've talked to a number of organizations, Trung, and they've mentioned that before they feel comfortable moving into more cloud activities, and before they feel comfortable adopting big-data analytics platforms, they want to make sure they have everything else in order. So ITSM is an important step before they go on to larger, more complex undertakings. Is that your philosophy as well?

Quach: Yes. There are two ways to do this. You use that technology to force yourself to be disciplined, or you discipline yourself. ITSM is one way to do it. You force yourself to work in a certain manner, a streamlined manner, and then you can go to the cloud. It's easier that way.

Gardner: Then, of course, you also have standardization in culture, in organization, not just technology, but the people and the process, and that can be very powerful.

Quach: If you asked me about cloud -- and I have done this with another company -- in a 30-minute interview about cloud, I would use 29 minutes to talk about the relationship between technology, people, and process.

Gardner: How about the future of IT? Any thoughts about the big picture of where technology is going? Even as we face larger data volumes, perhaps more complexity, and mobile applications, what are your thoughts about how we solve some of those issues?

Time to market

Quach: More and more, IT is going to be challenged to meet the speed demanded for improved time to market. To do that, you need processes, technology, and of course, people. The client -- the business -- is going to ask us to be faster. That's why we'll need to go to the cloud. But to go to the cloud, we need to master our IT services first. If we don't, we won't gain that agility, and we would not be competitive.


Gardner: Looking back, now that you have gone through an ITSM advancement, for those who are just beginning, what are some thoughts that you could share with them?

Quach: In an ITSM project, it's very hard to manage change. I'm talking about the people change, not the change-management technology process. Most of the time, you put the process in place and say that everybody has to work with it. If I were to redo it, I would bring in more people to understand the latest ITSM science and processes, and explain why, in five or 10 years, it's going to really help us.


After that, we'd put in the project, but we'd follow them and train them every year. ITSM is a never-ending story. You always have to be close to your clients. Even if they are IT, they are your clients or partners. You need to coach them to make sure they understand why they're doing this. Sometimes it takes a bit longer to get it right at the beginning, but it's all worth it in the end.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Dana Gardner  Desjardins Group  HP  HP DISCOVER  Interarbor Solutions  ITSM  Trung Quach 

 

MIT Media Lab computing director details the virtues of cloud for agility and disaster recovery

Posted By Dana L Gardner, Tuesday, October 07, 2014

The next BriefingsDirect innovator case study interview focuses on the MIT Media Lab in Cambridge, Mass., and how it's exploring the use of cloud and hybrid cloud to gain such benefits as IT speed, agility, and robust, three-tier disaster recovery (DR).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how the MIT Media Lab is exploiting cloud computing, we’re joined by Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab. The discussion, at the recent VMworld 2014 Conference in San Francisco, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the MIT Media Lab and how it manages its own compute requirements.

Bletsas: The organization is one of the many independent research labs within MIT. MIT is organized in departments, which do the academic teaching, and research labs, which carry out the research.

http://web.media.mit.edu/~mbletsas/


The Media Lab is a unique place within MIT. We deviate from the normal academic research lab in the sense that a lot of our funding comes from member companies, and it comes in a non-direct fashion. Companies become members of the lab, and then we get the freedom to do whatever we think is best.

We try to explore the future. We try to look at what our digital life will look like 10 years out, or more. We're not an applied research lab in the sense that we're not looking at what's going to happen two or three years from now. We're not looking at short-term future products. We're looking at major changes 15 years out.

I run the group that takes care of the computing infrastructure for the lab, and unlike a normal IT department, we're kind of heavy on computing. We use computers as our medium. The Media Lab is all about human expression, which is the reason for the name, and computers are one of the main means of expression right now. We're much heavier than other departments in how many devices you're going to see. We're on a pretty complex network and we run a very dynamic environment.

Major piece

A lot has changed in our environment in recent years. I've been there for almost 20 years. We started with very exotic stuff. These days, we still build exotic stuff, but we're using commodity components. VMware, for us, is a major piece of this strategy, because it allows us more efficient utilization of our resources and lets us control, a little bit, the server proliferation that we and everybody else have experienced.

We normally have about 350 people in the lab, distributed among staff, faculty members, graduate students, and undergraduate students, as well as affiliates from the various member companies. There is usually a one-to-five correspondence among virtual machines (VMs), physical computers, and devices, and there are at least 5 to 10 IPs per person on our network. You can imagine that having a platform that allows us to deploy resources easily, in a very dynamic and quick fashion, is very important to us.

We run a relatively small operation for the size of the scope of our domain. What's very important to us is to have tools that allow us to perform advanced functions with a relatively short learning curve. We don’t like long learning curves, because we just don’t have the resources and we just do too many things.

You are going to see functionality in our group that is usually only present in groups that are 10 times our size. Each person has to do too many things, and we like to focus on technologies that allow us to perform very advanced functions with little learning. I think we've been pretty successful with that.


Gardner: How have you created a data center that’s responsive, but also protects your property?

Bletsas: Unlike most people, we tend to have our resources concentrated close to us. We really need to interact with our infrastructure on a much shorter cycle than the average operation. We've been fortunate enough that we have multiple, small data centers concentrated close to where our researchers are. Having something on the other side of the city, the state, or the country doesn’t really work in an environment that’s as dynamic as we are.

We also have to support a much larger community that consists of our alumni and collaborators. If you look at our user database right now, it's something on the order of 3,500 people, as opposed to 350. It's very dynamic, in that it changes month to month. An important attribute of an environment like this is that we can't have too many restrictions. We don't have an approved list of equipment like you see in a normal corporate IT environment.

Our modus operandi is that if you bring it to us, we’ll make it work. If you need to use a specific piece of equipment in your research, we’ll try to figure out how to integrate it into your workflow and into what we have in there. We don’t tell people what to use. We just help them use whatever they bring to us.

In that respect, we need a flexible virtualization platform that doesn't impose too many restrictions on which operating systems you use or how the VMs are configured. That's why we find that solutions like the general public cloud are applicable to only a small part of our research. Pretty much every VM that we run is different from the one next to it.

Flexibility is very important to us. Having a robust platform is very, very important, because you have too many parameters changing and very little control of what's going on. Most importantly, we need a very solid, consistent management interface to that. For us, that’s one of the main benefits of the vSphere VMware environment that we’re on.

Public or hybrid

Gardner: What about taking advantage of cloud, public cloud, and hybrid cloud to some degree, perhaps for disaster recovery (DR) or for backup failover. What's the rationale, even in your unique situation, for using a public or hybrid cloud?

Bletsas: We use a hybrid cloud right now that's three-tiered. MIT has a very large campus, with extensive digital infrastructure running our operations across the board. We also have facilities that are either all the way across campus or across the river in a large co-location facility in downtown Boston, and we take advantage of that for first-level DR.

A solution like the vCloud Air allows us to look at a real disaster scenario, where something really catastrophic happens at the campus, and we use it to keep certain critical databases, including all the access tools around them, in a farther-away location.

It's a second level for us. We have our own VMware infrastructure, and then we can migrate loads to our central organization -- a much larger organization that takes care of all the administrative computing and general infrastructure at MIT in its own data centers across campus. We can also go a few states away to vCloud Air [and migrate our workloads there in an emergency].
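
The three tiers Bletsas outlines can be captured as a simple replication plan that records which workloads are protected at which tier. The sketch below is illustrative only; the workload groupings are assumptions drawn loosely from the discussion, not the lab's actual plan.

```python
# Illustrative tiered DR plan: which workloads replicate to which tier.
dr_plan = {
    "tier1_local":   {"target": "lab data centers close to the researchers",
                      "workloads": ["research VMs", "file services"]},
    "tier2_campus":  {"target": "central MIT data centers / Boston co-location",
                      "workloads": ["web sites", "internal systems"]},
    "tier3_offsite": {"target": "vCloud Air, a few states away",
                      "workloads": ["critical databases", "CRM", "events management"]},
}

def tiers_for(workload):
    """List every tier in which a given workload is protected."""
    return [tier for tier, spec in dr_plan.items() if workload in spec["workloads"]]

print(tiers_for("critical databases"))   # ['tier3_offsite']
```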


So it's a very seamless transition using the same tools. The important attribute here is that if you have an operation this small -- 10 people having to deal with such a complex set of resources -- you can't do that unless you have a consistent user interface that lets you migrate those workloads using tools you already know and are familiar with.

We couldn’t do it with another solution, because the learning curve would be too hard. We know that remote events are remote, until they happen, and sometimes they do. This gives us, with minimum effort, the ability to deal with that eventuality without having to invest too much in learning a whole set of tools, a whole set of new APIs to be able to migrate.

We use public cloud services also. We use spot instances if we need a high compute load and for very specialized projects. But usually we don’t put persistent loads or critical loads on resources over which we don’t have much control. We like to exert as much control as possible.

Gardner: It sounds like you're essentially taking metadata and configuration data -- the things that will be important to spin an operation back up should there be some unfortunate occurrence -- and putting that into the vCloud Air public cloud. Perhaps it's DR-as-a-service, but only a slice of DR, not the entire data set. Is that correct?

Small set of databases

Bletsas: Yes. Not the entire organization. We run our operations out of a small set of databases that tend to drive a lot of our websites. A lot of our internal systems drive our CRM operation. They drive our events management. And there is a lot of knowledge embedded in those databases.

Luckily for us, because we're not such a big operation -- we're relatively small -- you can include everything, including all the methods and programs you need to access and manipulate that data, within a small set of VMs. You don't normally use them out of those VMs, but you can keep them packaged in a way that, in a DR scenario, you can easily get access to them.

Fortunately, we've been doing that for a very long time because we started having them as complete containers. As the systems scaled out, we tended to migrate certain functions, but we kept the basic functionality together just in case we have to recover from something.


In the old days, we didn't have that multi-tiered cloud in place. All we had was backups in remote data centers. If something happened, you had to go in there, find some unused hardware that was similar to what you had, restore your backup, and so on.

Now, because most of MIT's administrative systems run under VMware virtualization, finding that capacity is a very simple proposition in a data center across campus. With vCloud Air, we can find that capacity in a data center across the state or somewhere else.

Gardner: For organizations that are intrigued by this tiered approach to DR, how did you decide which part of those tiers would go in which place? Did you do that manually? Is there a part of the management infrastructure in the VMware suite that allowed you to do that? How did you slice and dice the tiers for this proposition of vCloud Air holding a certain part of the data?

Bletsas: We are fortunate enough to have a very good, intimate knowledge of our environment. We know where each piece lies. That's the benefit of running a small organization. We occasionally use vSphere's monitoring infrastructure. Sometimes it reveals certain usage patterns that we were not aware of. That's one of the main benefits we've found there.

We realized that certain databases were used more than we thought. Just looking at those access patterns told us, “Look, maybe you should replicate this." It doesn’t cost much to replicate this across campus and then maybe we should look into pushing it even further out.

It's a combination of having visibility and nice dashboards that reveal patterns of activity you might not be aware of, even in an environment that's not as large as ours.

Gardner: At VMworld 2014, there was quite a bit of news, particularly in the vCloud Air arena. What intrigues you?

Standard building blocks

Bletsas: We like the move toward standardization of building blocks. That’s a good thing overall, because it allows you to scale out relatively quickly with a minor investment in learning a new system. That’s the most important trend out there for us. As I've said, we're a small operation. We need to standardize as much as possible, while at the same time, expanding the spectrum of services. So how do you do that? It’s not a very clear proposition.

The other thing that is of great interest to us is network virtualization. MIT is in a very peculiar situation compared to the rest of the world, in the sense that we have no shortage of IP addresses. Unlike most corporations where they expose a very small sliver of their systems to the outside world and everything happens on the back-end, our systems are mostly exposed out there to the public internet.

We don't run very extensive firewalls. We're a knowledge dissemination and distribution organization, and we don't have many things to hide. We operate in a different way than most corporations, and that shows in our networking. Our network looks nothing like what you see in the corporate world. The ability to move whole sets of IPs around our domain, which is rather large and which we have full control over, is very important to us.

It allows for much faster DR. We can do DR using the same IPs across town right now, because our domain of control is large enough. That is very powerful, because you can do very quick and simple DR without having to reprogram IPs, DNS servers, load balancers, and things like that. That is important.


The other trend that's also important is storage virtualization and storage tiering, and you see that with all the vendors down in the exhibit space. Again, it allows you to match the application profile much more easily to the resources you have. For a rather small group like ours, which can't afford to have all of its disk storage in very high-end systems, having a little bit of expensive flash storage and then a lot of cheap storage is the way to go.

The layers that have been recently added to VMware, both on the network side and the storage side help us achieve that in a very cost-efficient way.

For us, experimentation is the most important thing. Spinning up a large number of VMs to do a specific experiment is very valuable, and being able to commandeer resources across campus and across data centers is a necessary requirement for an environment like this. Flexibility is what we get out of that, plus agility and speed of operations.

In the old days, you had to procure hardware and switch hardware around. Now we rarely go into our data centers. We used to live in our data centers. We go there from time to time, but not as often as we used to, and that's very liberating. It's also very liberating for people like me, because it allows me to do my work anywhere.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Tags:  BriefingsDirect  cloud computing  Dana Gardner  disaster recovery  Interarbor Solutions  Michail Bletsas  MIT Media Lab  vCloud AIr  virtual machines  virtualization  VMWare  VMWorld  vSphere 

 

Cloud services brokerages add needed elements of trust and oversight to complex cloud deals

Posted By Dana L Gardner, Wednesday, October 01, 2014

Our BriefingsDirect discussion today focuses on an essential aspect of helping businesses make the best use of cloud computing.

We're examining the role and value of cloud services brokers, with an emphasis on small to medium-sized businesses (SMBs), regional businesses, and government, and looking at how to attain the best results from a specialist cloud services brokerage role within these different types of organizations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

No two businesses have identical needs, and so specialized requirements need to be factored into the use of often commodity-type cloud services. An intermediary brokerage can help companies and government agencies make the best use of commodity and targeted IaaS clouds, and not fall prey to replacing an on-premises integration problem with a cloud complexity problem.

To learn more about the role and value of the specialist cloud services brokerage, we're joined by Todd Lyle, President of Duncan, LLC, a cloud services brokerage in Ohio, and Kevin Jackson, the Founder and CEO of GovCloud Network in Northern Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do we get regular companies to effectively start using these new cloud services?

Lyle: Through education. That’s our first step. The technology is clearly here, the three of us will agree. It's been here for quite some time now. The beauty of it is that we're able to extract bits and pieces for bundles, much like you get from your cell phone or your cable TV folks. You can pull those together through a cloud services brokerage.


So brokerage firms go out and deal with cloud services providers like Amazon, Rackspace, Dell, and those types of organizations. They bring the strengths of each of those organizations together and bundle them. Then the consumer gets that on a monthly basis. It's non-CAPEX, meaning there is no capital expenditure.

You're renting these services, so you can expand and contract as necessary. To liken this to a utility environment -- the organizations that provide electricity and water -- you flip the switch on or turn the faucet on and off. It's a metered service.
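
The utility analogy can be illustrated with a toy metered-bundle bill, in which each service comes from a different provider and is priced per unit actually used. The services, providers, and rates below are made up for illustration.

```python
# Toy metered bundle: each service from a different provider, priced per unit used.
bundle = [
    {"service": "compute hours", "provider": "Provider A", "rate": 0.09, "used": 1200},
    {"service": "storage GB",    "provider": "Provider B", "rate": 0.03, "used": 500},
    {"service": "email seats",   "provider": "Provider C", "rate": 4.00, "used": 25},
]

def monthly_bill(bundle):
    """Price each metered line item and total the month's charges."""
    lines = [(item["service"], item["rate"] * item["used"]) for item in bundle]
    return lines, sum(cost for _, cost in lines)

lines, total = monthly_bill(bundle)
for service, cost in lines:
    print(f"{service}: ${cost:.2f}")
print(f"Total for the month (no capital expenditure): ${total:.2f}")
```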

Learn more about Todd D. Lyle's book, 

Grounding the Cloud: Basics and Brokerages, 

at groundingthecloud.org.

That's where you're going to get the largest return on your collective investment when you switch from a traditional on-premises IT environment, or even a private cloud, to the public cloud and the utility model it brings.

Government agencies

Gardner: Kevin, you're involved more with government agencies. They've been using IT for an awfully long time. How is the adjustment to cloud models going for them? Is it easier, is it better, or is it just a different type of approach that simply requires adjustment?

Jackson: Thank you for bringing that up. Yes, I've been focused on providing advanced IT to the federal market and Fortune 500 businesses for quite a while. The advent of cloud computing and cloud services brokerages is a double-edged sword. At once, it provides a much greater agility with respect to the ability to leverage information technology.


But, at the same time, it brings a much greater amount of responsibility, because cloud service providers have a broad range of capabilities. That broad range has to be matched against the range of requirements within an enterprise, and that drives a change in the management style of IT professionals.

You're going more from your implementation skills to a management of IT skills. This is a great transition across IT, and is something that cloud services brokerages can really aid. [See Jackson's recent blog on brokerages.]

Gardner: Todd, it sounds as if we're moving this from an implementation and a technology skill set into more of a procurement, governance, and contracts skill set, including creating the right service-level agreements (SLAs). These are, I think, new skills for many businesses. How is that coaching aspect of a cloud services brokerage coming out in the market? Is that something you are seeing a lot of demand for?

Lyle: It’s customer service, plain and simple. We hear about it all the time, but we also pass it off all the time. You have to be accessible. If you're a 69-year-old business owner and embracing a technology from that demographic, it’s going to be different than if you are 23 years old, different in the approach that you take with that person.

As we all get more tenured, we'll see more adaptability to new technologies in a workplace, but that’s a while out. That's the 35-and-younger crowd. If you go to 35-and-above, it's what Kevin mentioned -- changing the culture, changing the way things are procured within those cultures, and also centralizing command. That’s where the brokerage or the exchange comes into place for this. [See Lyle's video on cloud brokerages.]


Gardner: One of the things that’s interesting to me is that a lot of companies are now looking at this as not just as a way of switching from one type of IT, say a server under a desk, to another type of IT, a server in a cloud.

It’s forcing companies to reevaluate how they do business and think of themselves as a new process-management function, regardless of where the services reside. This also requires more than just how to write a contract. It's really how to do business transformation.

Does that play into the cloud services brokerage? Do you find yourselves coaching companies on business management?

Jackson: Absolutely. One of the things cloud services is bringing to the forefront is the rapidity of change. We're going from an environment where organizations expect a homogenous IT platform to where hybrid IT is really the norm. Change management is a key aspect of being able to have an organization take on change as a normal aspect of their business.

This is also driving business models. The more effective business models today are taking advantage of the parallel and global nature of cloud computing. This requires experience, and cloud services brokerages have the experience of dealing with different providers, different technologies, and different business models. This is where they provide a tremendous amount of value.

Different types of services

Gardner: Todd, this notion of being a change agent also raises the notion that we're not just talking about one type of cloud service. We're talking about software as a service (SaaS), bringing communications applications like e-mail and calendar into a web or mobile environment. We're talking about platform as a service (PaaS), if you're doing development and DevOps. We're talking about even some analytics nowadays, as people try to think about how to use big data and business intelligence (BI) in the cloud.

Tell me a bit more about why being a change agent across these different models -- and not just a cloud implementer or integrator -- raises the value of this cloud services brokerage role.

Lyle: It’s a holistic approach. I've been talking to my team lately about being the Dale Carnegie of the cloud, hence the specialist cloud services brokerage, because it really does come down to personalities.

In a book that I've recently written called Grounding the Cloud, Basics and Brokerages, I talk about the human element. That's the personalities, expectations, and abilities of your workforce, not only your present workforce but your future workforce, which we discussed just a moment ago, as far as demographics were concerned.

It's constant change. Kevin said it, using a different term, but that's the world we live in. Some schools are doing this, where they're adding this to their MBA programs. It is a common set of skills that you must have, and it's managing personalities more than you're managing technology, in my opinion.


Gardner: Tell me a bit more about this book, Todd; it's called Grounding the Cloud. When is it available, and how can people learn more about it?

Lyle: It's available now on Amazon, and they can find out more at www.groundingthecloud.org. This is a layman's introduction to cloud computing, and so it helps business men and women get a better understanding of the cloud -- and how they could best maximize their time and their money as it relates to their IT needs.

Gardner: Does the book get into this concept of the specialist cloud services brokerage (SCSB), as opposed to just a general brokerage, and getting at what's the difference?

Lyle: That’s an excellent question, Dana. There are a lot of perceptions, you have one as well, of what a cloud services brokerage is. But, at the end of the day -- and we've been talking about this in the entire discussion -- it's about the human element, our personalities, and how to make these changes so that the companies actually can speed up.

We discuss it here in the "flyover country," in Ohio. We meet in the book with Cleveland State University. We meet with Allen Black Enterprises, and then even with a small landscaping company to demonstrate how the cloud is being applied from six and seven users, all the way up to 25,000 users. And we're doing it here in the Midwest, where things tend to take a couple of years to change.

User advocate

Gardner: How is a cloud services brokerage different from a systems integrator? It seems there's some commonality. But you're not just a channel or a reseller; you're really as much an advocate for the user.

Lyle: A specialist cloud services brokerage is going to be more like Underwriters Laboratories (UL). It’s going to go out, fielding all the different cloud flavors that are available, pick what they feel is best, and bring it together in a bundle. Then, the SCSB works with the entity to adapt to the culture and the change that's going to have to occur and the education within their particular businesses, as opposed to a very high-level vertical, where some things are just pushed out at an enterprise level.

Jackson: I see this cloud services brokerage and specialist cloud services brokerage as the new-age system integrator, because there are additional capabilities that are offered.

For example, you need a trusted third-party to monitor and report on adherence to SLAs. The provider is not going to do that. That’s a role for your cloud services brokerage. Also you need to maintain viable options for alternative cloud-service providers. The cloud services brokerage will identify your options and give you choices, should you need the change. A specialist cloud services brokerage also helps to ensure portability of your business process and data from one cloud service provider to another.

Management of change is more than a single aspect within the organization. It’s how to adapt with constant change and make sure that your enterprise has options and doesn't get locked into a single vendor.

Lyle: It comes to the point, Kevin, of building for constant change. You're exactly right.

Learn more about Todd D. Lyle's book, Grounding the Cloud: Basics and Brokerages, at groundingthecloud.org.

Gardner: You raise an interesting point too, Kevin, that one shouldn’t get lulled into thinking that they can just make a move to the cloud, and it will all be done. This is going to be a constant set of moves, a journey, and you're going to want to avail yourself of the cloud services marketplace that’s emerging.

We're seeing prices driven down. We're seeing competition among commodity-level cloud services. I expect we'll see other kinds of market forces at work. You want to be agile and be able to take advantage of that in your total cost of computing.

Jackson: There's a broad range of providers in the marketplace, and that range expands daily. Similarly, there's a large range of requirements within any enterprise of any size. Brokers act as matchmakers, avoiding common mistakes, and also help the organizations, the SMBs in particular, implement best practices in their adoption of this new model.

Gardner: Also, when you have a brokerage as your advocate, they're keeping their eye on the cloud marketplace, so that you can keep your eye on your business and your vertical, too. Therefore, you're going to have somebody to tip you off when things change, and they will be on the vanguard for deals. Is that something that comes up in your book, Todd: the cloud services brokerage being an educated expert in a field where the business really wants to stick to its knitting?

Primary goal

Lyle: Absolutely. That’s the primary goal, both at a strategic level, when you're deciding what products to use -- the Rackspaces, the Microsofts, the RightSignatures, etc. -- all the way down to the tactical one of the daily operation. When I leave the company, how soon can we lock Todd out? How soon can we lock him down or lock him out? It becomes a security issue at a very granular level. Because it's metered, you turn it off, you turn Todd off, you save his data, and put it someplace else.

That's a role that requires command and control and oversight, and that's a responsibility. You're part butler. You're looking out for the day-to-day, the minute issues. Then you get up to a very high level. You're like UL. You're keeping an eye on everything that's occurring. UL comes to mind because they evaluate things that are tactile as well as things you can't touch, and the cloud is definitely something you can't touch.

Jackson: Actually, I believe it represents the embracing of a cooperative model by consumers of this information technology, but embracing with open eyes. This is particularly of interest within the federal marketplace, because federal procurement executives have to stop their adversarial attitude toward industry. Cloud services brokerages and specialist cloud services brokerages sit at the same table with these consumers.


Lyle: Kevin, your point is very well taken. I'll go one step further. We were talking up and down the scales, strategic down to the daily operations. One of the challenges that we have to overcome is the signatories, the senior executives, that make these decisions. They're in a different age group and they're used to doing things a certain way.

That being said, getting legislation to be changed at the federal level, directives being pushed down, will make the difference, because they do know how to take orders. I know I'm speaking frankly, but what's going to have to occur for us to see some significant change within the next five years is being told how the procurement process is going to happen.

You're taking the feather; I'm taking the stick, but it’s going to take both of those to accomplish that task at the federal level.

Gardner: We know that Duncan, LLC is a specialized cloud services brokerage. Kevin, tell us a little bit about the GovCloud Network. What is your organization, and how do you align with cloud brokerages?

Jackson: GovCloud Network is a specialty consultancy that helps organizations modify or change their mission and business processes in order to take advantage of this new style of system integrator.

Earlier, I said that the key to transition in a cloud is adopting and adapting to the parallel nature and a global nature of cloud computing. This requires a second look at your existing business processes and your existing mission processes to do things in different ways. That's what GovCloud Network allows. It helps you redesign your business and mission processes for this constant change and this new model.

Notion of governance

Gardner: I'd like to go back to this notion of governance. It seems to me, Todd, that when you have different parts of your company procuring cloud services, sometimes this is referred to as shadow IT. They're not doing it in concert, through a gatekeeper like a cloud broker. Not only is there a potential redundancy of efforts in labor and work in process, but there is this governance and security risk, because one hand doesn’t know what the other hand is doing.

Let's address this issue about better security from better governance by having a common brokerage gatekeeper, rather than having different aspects of your company out buying and using cloud services independently.

Lyle: We're your trusted adviser. We’re also very much a trusted member of your team when you bring us into the fold. We provide oversight. We're big brother, if you want to look at it that way, but big brother is important when you are dealing with your business and your business resources. You don’t want to leave a window open at night. You certainly don't want to leave your network open.

There's a lot going on in today's world, a lot of transition, the NSA and everything we worry about. It's important to have somebody providing command and control. We don’t sit there and stare at a monitor all day. We use systems that watch this, but we can tell when there's an increase or decrease out of the norm of activities within your organization.


It really doesn't matter how big or how small, there are systems that allow us to monitor this and give a heads up. If you're part of a leadership team, you’d be notified that again Todd Lyle has left an open window. But if you don't know that Todd even has the window, then that’s even a bigger concern. That comes down to the leadership again -- how you want to manage your entity.

We all want to feel free to make decisions, but there are too many benefits available to us, transparent benefits, as Kevin put it, to using the cloud and hiding in plain sight, maximizing e-mail at 100,000 plus users. Those are all good things but they require oversight.

It's almost like an aviation model, where you have your ground control and your flight crew. Everybody on that team is providing oversight to the other. Ultimately, you have your control tower that's watching that, and the control tower, both in the air and on the ground, is your cloud services brokerage.

Jackson: It’s important to understand that cloud computing is the industrialization of information technology. You're going from an age where the IT infrastructure is a hand-designed and built work of art to where your IT infrastructure is a highly automated assembly-line platform that requires real-time monitoring and metering. Your specialist cloud services brokerage actually helps you in that transition and operations within this highly automated environment.

Gardner: Todd, we spoke earlier about how we're moving from implementation to procurement. We've also talked about governance being important, SLAs, and managing a contract across a variety of different organizations that are providing cloud-type services. It seems to me that we're talking about financial types of relationships.


How does the cloud services brokerage help the financial people in a company? Maybe it's an individual who wears many hats, but you could think of them as akin to a chief financial officer, even though that might not be their title.

What is it that we are doing with the cloud services brokerage that is of a special interest and value to the financial people? Is it unified billing or is it one throat to choke? How does that work?

Lyle: Both, and then some. Ultimately it's unified billing and unified management of daily operations. It's helping people understand that we're moving away from capitalized expenses: the server, the software, things that are tactile that we're used to touching. We're used to being able to count them and we like to see our stuff.

So it's transitioning and letting go, especially for the people who watch the money. We have a fiduciary responsibility to the organizations that we work for. Part of that is communicating, educating, and helping the CFO-type person understand the transition not only from the CAPEX to the OPEX, because they get that, but also how you're going to correlate it to productivity.

It's letting them know to be patient. It's going to take a couple months for your metering to level up. We have some statistics and we can read into that. It's holding their hand, helping them out. That's a very big deal as far as that's concerned.

Gardner: Let's start to think about how to get started. Obviously, every company is different. They're going to be at a different place in terms of maturity in their own IT, never mind the transition to cloud types of activities. Would you recommend the book as a starting point? Do you have some other materials or references? How do you help that education process get going? I'm thinking about organizations that are really at the very beginning.

Gateway cloud

Lyle: We've created a gateway cloud in our book, not to confuse the cloud story. Ultimately, we have to take into consideration our economy, the world economy today. We're still very slow to move forward.

There are some activities occurring that are forcing us to make change. Our contracts may be running out. Software like XP is no longer supported. So we may be forced into making a change. That's when it's time to engage a cloud services brokerage or a specialist cloud services brokerage.

Go out and buy the book. It's available on Amazon. It gives you a breakdown, and you can do an assessment of your organization as it currently is, and it will help you map your network. Then, it will help you reach out to a cloud services brokerage, if you are so inclined, with points of interest for a request for proposal or a request for information.

The fun part is, it gives you a recipe using Rackspace, Jungle Disk, and gotomeeting.com, where you get to build a baby cloud. Then, you can go out and play with it.


You want to begin with three points: file sharing, remote access, and email. You can be the lighthouse or you can be a dry-cleaners, but every organization needs file sharing, remote access, and email. We open-sourced this recipe or what we call the industrial bundle for small businesses.

It's not daunting. We’ve got some time yet, but I would encourage you to get a handle on where your infrastructure is today, digest that information, go out and play with the gateway cloud that we've created, and reach out to us if you are so inclined.

Learn more about Todd D. Lyle's book, Grounding the Cloud: Basics and Brokerages, at groundingthecloud.org.

We’d love for you to use one of our organizations, but ultimately know that there are people out there to help you. This book was written for us, not for the technical person. It is not in geek speak. This is written for the layperson. I've been told it’s entertaining, which is the most important part, because you’re going to read it then.

Jackson: I would urge SMBs to take the plunge. Cloud can be scary to some, but there is very little risk and there is much to gain for any SMB. Using and taking advantage of the cloud gateway that Todd mentioned is a very good, low-risk, and high-reward path toward the cloud.

Gardner: I would agree with both of what you all said. The notion of a proof of concept and dipping your toe in. You don't have to buy it all at once, but find an area of your company where you’re going to be forced to make a change anyway and then to your point, Kevin, do it now. Take the plunge earlier rather than later.

Jackson: Before you're forced.

Large changes

Gardner: Before you're forced. You want to look at a tactical benefit and then work toward a strategic benefit, but there are going to be some really large changes happening in what these cloud providers can do in a fairly short amount of time.

We're moving from discrete apps into the entire desktop, so a full PC experience as a service. That’s going to be very attractive to people. They're going to need to make some changes to get there. But rather than thinking about services discreetly, more and more of what they're looking for is going to be coming as the entire IT services experience, and more analytics capabilities mixed into that. So I am glad to hear you both explaining how to do it, managed at a proof-of-concept level. But I would say do it sooner rather than later.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Duncan, LLC.


Tags:  BriefingsDirect  cloud computing  cloud services  cloud services brokerage  Dana Gardner  Duncan LLC  Interarbor Solutions  Kevin Jackson  Todd Lyle 


University of New Mexico delivers efficient IT services by centralizing on secure, managed cloud automation

Posted By Dana L Gardner, Wednesday, September 24, 2014

The latest BriefingsDirect discussion focuses on one of the toughest balancing acts in seeking the best of cloud computing benefits. This balance comes from obtaining the proper degree of centralization or "common good" for infrastructure efficiency, while preserving a sufficient culture of decentralization for agility, innovation, and departmental-level control.

The requirement for empowering centralization is nowhere more evident than in a large university setting, where support and consensus must be preserved among such constituencies as faculty, staff, students, and researchers -- across an expansive educational community.

But the typical IT model does not support localized agility when it takes weeks to spin up a server, if online services lack automation, or if manual processes hold back efficient ongoing IT operations. Too much IT infrastructure redundancy also means weak security, high costs, lack of agility, and slow upgrades.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're joined by an IT executive from the University of New Mexico (UNM) to learn more about moving to a streamlined and automated private cloud model to gain a common good benefit, while maintaining a vibrant and reassured culture of innovation. We're also joined by a VMware executive to learn more about the latest ways to manage cloud architectures and processes to attain the best of cloud efficiencies, while empowering improved services delivery and process agility.

They are: Brian Pietrewicz, Director of Computing Platforms at the University of New Mexico in Albuquerque, and Kurt Milne, Director of Product Marketing in the Management Business Unit at VMware. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your IT organization at the university and how you've been able to do change, but at the same time not alienate your users, who are, I imagine, used to having things their way.

Pietrewicz: The University of New Mexico is a highly decentralized organization. In most cases, the departments are responsible for their own IT, and that often means they don't have the resources to effectively run IT, in particular things like data centers, servers, storage, disaster recovery (DR), and backups.

Pietrewicz

What we're doing to improve the process is providing infrastructure as a service (IaaS) to those groups so that they don’t have to worry about the heavy lifting of the infrastructure pieces that I mentioned before. They can stay focused on their core mission, whether that’s physics, or psychology, or who knows what.

So we offer IaaS. We're running a VMware stack, and we're also running vCloud Automation Center (vCAC). We've deployed the Self-Service Portal. We give departments, faculty members, or departmental IT folks the ability to go into the portal and deploy their own machines at will.

Then, they are administrators of that machine. They also have additional management features through the vCAC console so that they can effectively do whatever they need to do with the server, but not have to worry about any of the underlying infrastructure.

Gardner: That sounds like the best of both worlds. In a sense, you're a service provider in the organization, getting the benefits of centralization and efficiency, but allowing them to still have a lot of hands-on control, which I assume that they want.

Pietrewicz: Correct. The other part is the agility, the ability for them to be able to react quickly, to consume infrastructure on demand as they need it, and have the benefit of all the things that virtualization brings with redundant infrastructure, lower cost of ownership, and those sorts of things.

New expectations

Milne: It’s an interesting time to be in the IT space, because there's this new set of expectations being imposed on IT by the business to be strategic, to quickly adopt new technology, and boost innovation.

Milne

At the same time, IT still has the full set of responsibilities they've always had -- to stay secure, to avoid legacy debt, to drive operational excellence so they maintain uptime, security, and quality of service for transactional systems and business-critical systems.

It’s really an interesting paradox. How do you do these two things that are seemingly mutually exclusive -- go fast, but at the same time, stay in control?

Brian's approach is what I call "push button IT," where you give folks a button to push and they get what they need when they want it. But if IT controls the button and they control what happens when the user pushes the button, IT is able to maintain control. It's really the best of both worlds.

Gardner: Brian, tell us a little bit about how long you have been there and what it was like before you began this journey?

Pietrewicz: I've been at UNM for about two-and-a-half years, and I can tell you the number one complaint. We suffer from a lot of the same problems that other large IT shops have, with funding and things like that. But the primary issue that we had when I walked in the door was customers being upset because we didn't have clearly-defined services, and we had sold these services to these customers.

We had sold virtual machines (VMs) with database backups, and all kinds of interesting things, with no service-level agreements (SLAs), no processes, nothing wrapped around it. The delivery of these services was completely inconsistent.

So I started out down the new path. The first thing that we did was to make the services more consistent. Just to give you an example, deploying a virtual machine for a customer. The way that it was when I got here was that a ticket came into the service desk. It went to a single technician, and then whichever technician got that ticket figured out their own way of getting that machine deployed.


As the next step in that process, we went through and, instead of just having it being done a different way by whoever received the ticket, we identified all the steps associated. In looking at all those steps, we identified over 100 manual steps that went through six completely separate groups inside of our organization.

Those included operating system, storage, virtualization, security, and networking for firewall changes. In all those various groups that deploy their individual piece of that puzzle, it was being done differently every time. Our deployment times were taking as long as three weeks. You can imagine how painful that is when it takes 20 minutes to spin up a VM -- but it was taking three weeks to deploy it to a customer.

We identified all the steps and defined the process very, very clearly; exactly what it takes to deploy a VM. The interesting thing that came out of that was that it gave us the content necessary to be able to start developing a true service description and an SLA.

Ticketing system

It also made it so that it was consistent. We did a few things after we did the process development. We generated workflows within our ticketing system, so that all that happened was a ticket was put in and then it auto-generated all the necessary tickets to deploy the VM, so it happened in a very consistent way.

That dropped the deployment time from three weeks down to about three days, because it still had to go through certain approval process and things like that with security.

For the next step we said, "Okay, how can we do this better?" We looked at all of those steps that we put in place and found that they were all repetitive, manual steps that could be easily automated. Enter VMware vCAC.

We took all the steps, after we had them clearly defined, and we automated all the steps that we could. We couldn't automate all of them, for example, sending information to our billing system to bill the customer back. From vCAC, we shoot an email over to our ticketing system, which generates a ticket. Then the billing information is still entered manually, and we're working on an upgrade to that.

UNM is approximately 45,000 faculty, staff, and students. We have about 100 either departments or affiliates, and today, we're running about 660 VMs for our organization. For central IT, we're between 98 percent and 99 percent virtualized.

When I first got here, the services were not defined and the processes were not defined. Since then, we have clearly defined the processes, narrowed those down into the very specific processes and tasks that had to be done, and then we automated. We're going through the process of automating every step in that process.


Now, we have a thing we call Lobo Cloud -- our mascot is the Lobo. Customers can now go online and deploy a machine within 20 minutes. So basically everything has transformed from extremely inconsistent service and taking as long as three weeks to deploy, to now it being the equivalent going into McDonald’s and ordering a Big Mac. It’s extremely consistent and down from three weeks to 20 minutes.
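To make that flow a bit more concrete, here is a minimal Python sketch of a portal-driven provisioning pipeline like the one described above: a request triggers the VM build, a follow-up ticket, and a billing record. Every endpoint, field, and service name here is a hypothetical placeholder for illustration, not the actual vCAC, ticketing, or billing APIs at UNM.

import requests

# Minimal sketch of an automated provisioning flow, loosely modeled on the
# portal-driven process described above. All endpoints and field names are
# hypothetical placeholders, not the real vCAC, ticketing, or billing APIs.
PORTAL_API = "https://cloud.example.edu/api"      # hypothetical self-service portal
TICKET_API = "https://tickets.example.edu/api"    # hypothetical ticketing system
BILLING_API = "https://billing.example.edu/api"   # hypothetical billing system

def provision_vm(department, size="small"):
    """Request a VM through the self-service portal and return its record."""
    resp = requests.post(f"{PORTAL_API}/vms", json={"owner": department, "size": size})
    resp.raise_for_status()
    return resp.json()  # e.g. {"id": "vm-123", "ip": "10.0.0.5"}

def open_ticket(vm, department):
    """Auto-generate the follow-up ticket that used to be created by hand."""
    resp = requests.post(f"{TICKET_API}/tickets", json={
        "summary": f"VM {vm['id']} deployed for {department}",
        "queue": "billing-entry",
    })
    resp.raise_for_status()
    return resp.json()["ticket_id"]

def record_billing(vm, department):
    """Charge-back entry; in the setup described, this step is still manual."""
    resp = requests.post(f"{BILLING_API}/charges", json={
        "department": department,
        "resource": vm["id"],
        "rate": "monthly-vm-small",
    })
    resp.raise_for_status()

if __name__ == "__main__":
    vm = provision_vm("physics")
    ticket = open_ticket(vm, "physics")
    record_billing(vm, "physics")
    print(f"Provisioned {vm['id']}, opened ticket {ticket}, billing recorded.")

The point of the sketch is the sequencing, not the tooling: once the steps are defined, each hand-off that used to be a manual ticket can become a call in a workflow.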

Gardner: I assume Brian that you've adopted some industry-standard methods, perhaps a framework, that gave you some guidance on this. How does your service delivery policy adhere to an industry standard like ITIL?

Pietrewicz: That’s what we use. We follow ITIL and we're at varying levels of maturity with it. ITIL is very challenging to implement, but it's extremely helpful, because it gives you a framework to work within, to start narrowing down these process, defining services, setting SLAs. It gives you a good overarching framework to work within.

The absolute hardest part of all of this is implementing the ITIL framework, identifying your processes, identifying what your service is, and identifying your SLA. Walking through all of that is exponentially harder than putting the technology in place.

Gardner: It seems to me that not only are you going to get faster servers, response times, and automation, but there are some other significant benefits to this approach. I'm thinking about security, disaster recovery (DR), the ability to budget better through an OPEX model, and then ultimately reduce total costs.

Is it too soon, or have you seen some of these other benefits that I typically hear about when people move to a more automated cloud approach? How is that working for you?

Less expensive

Pietrewicz: We don’t really have good statistics on it. For the folks that had machines sitting underneath their desks and in closets before, we don’t have a lot of the statistics to know exactly the cost and the time they were spending on that.

Anybody who works with virtualization quickly learns that once you hit a certain size, it becomes significantly less expensive. You become far more agile and you get a huge number of benefits. Some of them are things that you mentioned -- the deployment time, DR, the ability to automate, the taking advantage of economies of scale.

Instead of deploying one $10,000 server per application, you're now loading up 70 machines on a $15,000 server. All of those things come into play. But we really don’t have good statistics, because we didn’t really have any good processes before we started.
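As a back-of-the-envelope illustration of that consolidation math, using only the round numbers quoted above (and ignoring licensing, storage, and operations overhead):

# Rough consolidation arithmetic using the round numbers quoted above.
dedicated_server_cost = 10_000     # one application per physical server
virtualization_host_cost = 15_000  # one host running many VMs
vms_per_host = 70

cost_per_vm = virtualization_host_cost / vms_per_host
print(f"Roughly ${cost_per_vm:,.0f} per VM versus ${dedicated_server_cost:,} per dedicated server")
# Prints roughly $214 per VM in hardware terms, before any other costs.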

What’s interesting now is that our next step in the process is to automate our billing process. Once we do that, we're going to have everything from our virtual infrastructure deployed into our billing system and either a charge-back or a show-back methodology.


So we'll have complete detailed costs of all of our infrastructure associated with every department and every application that is using our service. We'll be able to really show the total cost of ownership (TCO).

Milne: Brian, it sounds like you're on a path that a lot of our customers are on. What we see typically is that there is a change in consumption behavior when your customers know that they can get IaaS on demand. They stop hoarding resources. The same kind of tools and processes that can automate the delivery of those services can also automate tearing down those services when they're done.

Virtualization by itself increases capacity utilization quite a bit, but then going to this kind of services delivery, service consumption for infrastructure, actually further increases utilization and drives down over-provisioning.

Adding cost transparency to that service will further change your consumers' behavior. The ability to get resources when you need them, and to pay only for what you use, drives down the amount of resources that you have to keep in your data center.

Pietrewicz: Absolutely. It’s amazing what happens when you have to pay for something and it’s very visible.

Milne: I always feel that if IT is free that really changes the supply and demand equation, if you study economics. People don’t know what to do with free. They typically take too much.

Economic behavior

Pietrewicz: Right. This really starts driving basic economic and social behavior into the equation in IT. It’s a difficult thing for organizations to get their head around, and they're sort of getting it here at the university. It’s not completely in place. The way that we look at it is as a, "We'll build it, and they'll come" kind of thing.

Most folks have figured out that they can really save that money. Instead of going out and buying a $10,000 server, they can buy a $1,000 VM from us that does the exact same thing. If they don’t want it any more, they can turn it off and not pay any more. All of those things come into play.

Another piece on that is the university was experimenting with a thing called responsibility center management (RCM), which is a budgeting process that works toward the bottom line of a particular organization. That means that people have to be transparent and make clear decisions about where they're spending their money. That's also starting to drive adoption.

Ancillary benefits

Gardner: We talked about some of the ancillary benefits of your approach, but there are some direct benefits when you go to a cloud model, which gives you more options. You can have your private cloud. You can look to public cloud and other hosting models, and then you can start to see a path or a vision toward a hybrid cloud environment, where you might actually move workloads around based on the right infrastructure approach for the right job at the right time. Any thoughts about where your cloud goals are vis-à-vis the hybrid potential?

Pietrewicz: We have a few things in play that we're actively working. Today, we have people using various cloud providers. The interesting part is that they're just paying for it with a credit card out of their department, and the university doesn't have any clear way of knowing exactly what's out there. We don't really have any good security mechanisms in place for determining whether there's any sensitive data being stored out there inadvertently.

We're working with a lot of the cloud providers that we're already spending money with to develop consolidated accounts. One, we can save money through economies of scale. Two, we can get some visibility into what folks are actually using the cloud for. And three, IT would like to act as an adviser who can point out, among the various cloud providers that are out there, that this particular provider is good at functionality or that one is good at security.


The first step is to corral the use of public cloud for UNM and create an escorting process to the cloud. The second step is going to be a hybrid cloud that we'll set up from our private cloud here on site. We envision setting up hybrid cloud services with those public cloud providers to be able to move the workloads back and forth when necessary.

The other major benefit that we very much look forward to is being able to do DR in the cloud and taking advantage of the ability to replicate data and then spin up systems as you need them, rather than having a couple of million dollars in equipment sitting, waiting, and hoping you never use it. Things that you have to refresh every four years so that you have a viable DR plan.

Gardner: Is vCloud Automation Center something that will be useful in moving to this hybrid model? The one button to push, as it were, on the private cloud, will that become a one button to push in the hybrid model as well?

Pietrewicz: It will. I mentioned those various cloud service providers. Most of them are compatible with the vCloud Connector, so that you can simply just connect up that hybrid cloud service and with a little bit of work, be able to massage your portal.

We can have a menu option of public cloud providers through our portal that they could just select and say that they want to get a vCHS, Amazon, or Terremark, and then potentially move workloads back and forth. So vCAC and vCloud Connector are all at the center of it.
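Here is one way to picture the "menu of public cloud providers" behind such a portal, as a minimal Python sketch. The provider names are the ones mentioned in the discussion; the data model, rates, and connector labels are invented for illustration and are not the actual vCAC catalog.

from dataclasses import dataclass
from typing import List, Optional

# Hypothetical service-catalog entries for a hybrid cloud portal menu.
# This is an illustration only, not the vCAC or vCloud Connector data model.
@dataclass
class CatalogItem:
    name: str           # label shown to the requesting department
    provider: str       # "private", "vchs", "amazon", "terremark", ...
    connector: str      # how workloads would move, e.g. "vcloud-connector"
    hourly_rate: float  # placeholder charge-back rate

CATALOG = [
    CatalogItem("Lobo Cloud small VM", "private", "native", 0.05),
    CatalogItem("vCHS burst VM", "vchs", "vcloud-connector", 0.09),
    CatalogItem("Amazon burst VM", "amazon", "vcloud-connector", 0.08),
    CatalogItem("Terremark burst VM", "terremark", "vcloud-connector", 0.08),
]

def menu(provider: Optional[str] = None) -> List[CatalogItem]:
    """Return the catalog entries a user would see in the portal menu."""
    if provider is None:
        return CATALOG
    return [item for item in CATALOG if item.provider == provider]

if __name__ == "__main__":
    for item in menu():
        print(f"{item.name:22} via {item.provider:10} at ${item.hourly_rate:.2f}/hr")

Keeping the catalog in one place is what lets the portal stay the single front door while the providers behind it change over time.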

The other interesting piece that we're working on and going to try to figure out as part of this is that we really want to start looking into NSX and/or VIX to be able to provide very clear security boundaries, basically multi-tenancy, and then potentially be able to move those multi-tenant environments back and forth in the cloud or extend them from public to private cloud as well.

Software-defined networking

Gardner: Brian, you mentioned multi-tenancy earlier, and of course, there is a lot going on with software-defined data center, networking, and storage. What is it about it that’s interesting to you and why is this a priority for you, software-defined networking (SDN), for example?

Pietrewicz: SDN is the next sort of step in being able to truly automate your IaaS and your virtual environment. If you want to be able to dynamically deploy systems and have them be in a sandbox that is multi-tenant by customer, you really need to have an SDN-type solution, or at least it's extremely helpful to have one.

One of the things that we are looking at next is to be able to implement something like NSX, so that we can deploy the equivalent of what’s a virtual wire, a multi-tenant environment, to individual customers, so that they can only see their stuff and can’t see their neighbors and vice versa.

The key is the ability to orchestrate that on demand and not have to deal with the legacy VLAN and firewall kind of issues that you have with the legacy environment.
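The "virtual wire" idea can be sketched as follows: each customer gets an isolated logical segment with a default-deny boundary, created on demand instead of through manual VLAN and firewall changes. The object model below is invented for illustration and does not reflect the actual NSX API.

import ipaddress
import itertools

# Illustrative sketch of on-demand, per-tenant network isolation (a "virtual
# wire" per customer). The object model is invented and is not the NSX API.
_segment_ids = itertools.count(5000)

def create_tenant_network(tenant, cidr):
    """Carve out an isolated logical segment plus a default-deny boundary."""
    network = ipaddress.ip_network(cidr)
    return {
        "tenant": tenant,
        "segment_id": next(_segment_ids),  # analogous to a VXLAN segment ID
        "cidr": str(network),
        "firewall_rules": [
            {"action": "allow", "src": str(network), "dst": str(network)},
            {"action": "deny", "src": "any", "dst": str(network)},  # block cross-tenant traffic
        ],
    }

if __name__ == "__main__":
    for net in (create_tenant_network("physics", "10.10.0.0/24"),
                create_tenant_network("psychology", "10.10.1.0/24")):
        print(f"{net['tenant']}: segment {net['segment_id']} on {net['cidr']}")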

Gardner: It’s interesting how a lot of these major trends -- service delivery, cloud, private cloud, DR, and SDN -- are interrelated. It’s a complex bundle, but the payoffs, when you do this inclusively, are pretty impressive.


Pietrewicz: Whenever you get to the point of abstracting things to the software level, you provide the ability to automate. When you have the ability to automate, you get tremendous flexibility. That sometimes can be an issue in and of itself, just making decisions on how you want to do something. But along with that flexibility, you get the ability to automate just about anything that you want or need to be able to do.

The second piece to that is that we're really excited about figuring out, when we build the hybrid cloud model, how we might be able to extend those tenants into the cloud, either as active running workloads or in a DR model, so that the multi-tenancy is retained.

Milne: From VMware’s perspective, that kind of network virtualization capability is critical for our hybrid cloud service. It’s that capability that NSX provides that creates that seamless experience from your data center out to the hybrid cloud.

As you said, Brian, that kind of network configuration, allocation, and reallocation of IP addresses, when you are moving things from one data center to another, is not something you want to do on a manual basis. So NSX is a key component of our hybrid cloud vision. It’s something that lot of the other cloud providers just don’t have.

Pietrewicz: I see it as the next frontier in IT. I think that when SDN starts taking off, it’s going to be a game changer in ways that we are not even recognizing yet, and that’s one example. Moving a workload from one network to another network is extremely powerful.

Cloud broker

Gardner: Kurt, this sounds as if not only is Brian transitioning into being a service provider to his constituencies, but now he's also becoming a cloud broker. Is this typical of what you're seeing in the market as well?

Milne: It is. Some of our customers will take a step to try to get their arms around shadow IT, users going around IT, to just offer that provisioning option through the IT portal. So it’s like, "You're using Amazon? That’s fine. We can help you do that." So putting a button in the service catalog deploys the kind of work that they've been doing in a public cloud like Amazon, but it has to come through IT. Then, IT is aware of it.

There's a saying I like. It's called the "cloud boomerang." A lot of times, the IT customers will put things out in the public cloud, but like a boomerang, it seems to always come back. The customer wants to integrate it with an existing system or they realize that they have to support it up in the cloud. A lot of times, those rogue deployments make their way back to the IT organization. So putting an Amazon service in the vCAC portal and not changing anything else is a nice first step in corralling that.


Pietrewicz: That is exactly what we're seeing. At a university, because there isn’t really governance, it’s more like build a good service and hope they come. We take the approach of trying to enable it. We want to make it very transparent and say that they can use Amazon or vCHS, but there's a better way to do it. If you do it through the portal, you may be able to move those workloads back and forth.

We are actually seeing exactly what you mentioned, Kurt. Folks are reaching the limitations of using some of the cloud providers, because they need to get access to data back here at UNM and are actually doing the boomerang approach. They started out there and now they're migrating their machines into our IaaS so that they can get access to the data that they need.

Gardner: Kurt, we heard some very interesting things at VMworld recently around the cloud-management platform. Why don’t you tell us a little bit about that and how that fits into what we've been discussing in terms of this ongoing maturity and evolution that a large organization like the University of New Mexico is well into?

Milne: We recently announced the vRealize Suite, which is a cloud management platform. So we're moving our product management strategy to a common platform.

Over the years, VMware has either built or acquired quite a few different management products. We've combined those products into a number of suites, like our automation, operations, and our business management suites. Now, we're taking that next step and combining a lot of those capabilities into a single platform.

There are a couple of guiding ideas there. What we see in organizations like Brian's is that the lines between the automated provisioning of workloads and the ongoing operations, maintenance, and support of those workloads are really starting to blur.

So you have automation tasks that might happen when you're doing a support call. Maybe you want to provision some more resources, and there are operations tasks like checking system health that you might want to do as a step in an automation routine.

Shared services

Our product strategy change is to move toward a shared-services model, similar to a service-oriented architecture. The different services underlying our management products would be executable through a tool like vCAC, through a command-line interface, or through a REST API. There's kind of a mix-and-match opportunity to execute those services in different ways.

To build that platform with the shared-services model on top, we need to start re-architecting some of our products in the back end, so that we have a common orchestration engine, a common DR and backup engine, and a common policy engine. You don't want one tool to undo the work that another tool did yesterday. You can't have conflicting robots going out and doing automated tasks.

The general idea is to try to further consolidate these different management functions into a single platform. The overall goal is to try to help organizations maintain control, but then also increase flexibility and speed for their business users.
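A minimal sketch of that shared-services idea: one underlying service implementation that a portal, a command line, and a REST layer all call, so different tools stop stepping on each other. The names and routes below are hypothetical and are not VMware's actual platform.

import argparse
import json

# Sketch of a single shared management service exposed through several front
# ends, so automation and operations tools call one engine. Names are hypothetical.
def check_health(vm_id):
    """The one shared implementation every front end calls."""
    # A real platform would query a common operations engine here.
    return {"vm": vm_id, "status": "green", "cpu_pct": 12}

def rest_handler(path):
    """What a REST layer might return for GET /services/health/<vm_id>."""
    vm_id = path.rsplit("/", 1)[-1]
    return json.dumps(check_health(vm_id))

def cli():
    """Command-line front end over the same shared service."""
    parser = argparse.ArgumentParser(description="Shared health-check service")
    parser.add_argument("vm_id")
    args = parser.parse_args()
    print(json.dumps(check_health(args.vm_id), indent=2))

if __name__ == "__main__":
    cli()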

Gardner: Brian, is that something that you think is going to be on your radar? Is management so distributed now that you're looking for a more consolidated approach that’s inclusive?


Pietrewicz: That would be wonderful. We're doing things many different ways. If you take the example of orchestration, we are using Orchestrator, PowerShell, Perl, and starting to experiment with Puppet.

It would be really good if you could have one standardized way that you approach orchestration, as an example, and how that might tie into all the other pieces for back-end management, rather than handling it several different ways. As Kurt was mentioning, one part starts to step on another part. Having that be consolidated and consistent would be a huge value.
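One way to read that wish for "one standardized way" is a thin dispatch layer in front of the tools already in use, so callers get a single entry point while each task still runs through whichever tool owns it. The sketch below is a hypothetical illustration of that pattern; the task names and commands are placeholders, not UNM's actual scripts.

import subprocess

# Hypothetical dispatch layer in front of existing orchestration tools.
# Task names and commands are placeholders for illustration only.
BACKENDS = {
    "join-domain": ["pwsh", "-File", "join_domain.ps1"],   # PowerShell script
    "apply-baseline": ["puppet", "apply", "baseline.pp"],  # Puppet manifest
    "register-dns": ["perl", "register_dns.pl"],           # legacy Perl tool
}

def run_task(task, *args):
    """Single orchestration entry point; routes to the owning backend."""
    if task not in BACKENDS:
        raise ValueError(f"Unknown task: {task}")
    cmd = BACKENDS[task] + list(args)
    print("dispatching:", " ".join(cmd))
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    run_task("apply-baseline", "--noop")  # dry run, if the backend supports it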

Milne: The other part of the strategy is also to make that work across environments. So the same tools and services would be available if you are provisioning up to Amazon or to your private cloud or hybrid cloud service, and even different hypervisors.

We're fully aware of the heterogeneous nature of the modern data center. So we're shifting to try to create that kind of powerful common management stack with that unified management experience across all of the environment. It’s kind of a nirvana. When we talk to people, they say that’s exactly what they want. So our vision is to kind of march towards delivering on that.

Gardner: Kurt, I am trying to recall from VMworld whether this was offered on-premises, as a service from a cloud, or some combination?

Service offerings

Milne: That's the other interesting part of this. We're starting to go down the path of offering a number of our management products as a service. For example, at VMworld, we announced the availability of a beta for our vCAC product as software as a service (SaaS), so you can, without installing any software, get a service portal, get that workflow and policy engine, and deploy infrastructure services across different environments.

We'll be rolling out betas for our other products in subsequent quarters over the next year or so. Then potentially we could have the SaaS services interact with and combine with the services that are available through the products that are installed on-premise. Our goal is to get these out there and then understand what the best use cases are, but that kind of mix and match is part of the vision.

Gardner: It’s interesting. We might have a reverse boomerang when it comes to the management of all of this. Does that sound appealing Brian? Is that something you would look to as a cloud service, comprehensive management?


Pietrewicz: Absolutely, but it’s largely dependent on return on investment (ROI). It’s that balance of, when you get to a certain level in an IT shop, it’s sometimes cheaper to do things in-house than it is to outsource it, and sometimes not. You have to do the analysis on the ROI on what makes more sense to bring it in or to use a SaaS.

As an example, we completely outsourced all of our email, because it’s a lot of work. It's very simple and easy to do as a SaaS solution, but it’s a lot more work to do in-house. It’s definitely something that we would look into.

Milne: In a mid-sized organization that might have 300 different applications that the IT organization supports, maybe 50 of those are IT tools. Already we've seen progress with companies like ServiceNow that have a SaaS-based service desk. It makes sense to start to turn more of those management products into a SaaS delivery model.

Gardner: Brian, any thoughts for others who are starting to move in your direction, perhaps building their own Lobo Cloud, their own portal for rationalizing these services and measuring them better? What in 20/20 hindsight could you recommend for them as they go about this? Any lessons learned you could share?

Process orientation

Pietrewicz: The biggest lesson learned, without a doubt, is the focus on the process orientation, the ITIL model. The technology is really not that hard. It’s determining what your service is, what are you trying to deliver, and then how do you build that into a consistently delivered service, complete with SLAs and service descriptions that meet the customer needs. That's the most difficult part.

The technical folks can definitely sling the technology. That doesn’t seem to be that big of a deal. The partners and providers do a very good job of putting together products that make it happen, but the hard part is defining the processes and defining the services and making sure that they are meeting the customer needs.

Gardner: Kurt, any thoughts in reaction to what Brian said in terms of getting started on the right path around cloud rationalization of your IT organization?

Milne: One of the things that I've seen is a lot of organizations go through this process that Brian has described, trying to clearly define their services and figure out which parts of those services they're going to automate.


A lot of organizations start that service definition effort from an inside-out perspective, get a bunch of IT guys together, and try to define what you do on a daily basis in a service. That's hard.

The easier approach is just to go talk to your customers and users and ask, "If I were going to give you a button you could click to get what you need, what would you put behind the button?" Then, you define your services more from an outside-in perspective. It seems to be where companies get anyway and you just shortcut a lot of teeth gnashing and internal meetings when you do it that way.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Tags:  BriefingsDirect  Brian Pietrewicz  Cloud  Dana Gardner  HP VirtualSystem for VMware  IaaS  Interarbor Solutions  Kurt Milne  software-defined networking  UNM  virtualization 


The Open Group panel: Internet of things poses massive opportunities and obstacles

Posted By Dana L Gardner, Monday, September 22, 2014

What The Open Group refers to as Open Platform 3.0 encompasses the combined impacts of cloud, big data, mobile, and social. But to each of these now we can add a new cresting wave of complexity and scale as we consider the rapid explosion of new devices, sensors, and myriad endpoints that will be connected using internet protocols, standards and architectural frameworks.

This so-called Internet of Things means more data, more cloud connectivity and management, and an additional tier of “things” that are going to be part of the mobile edge -- and extending that mobile edge ever deeper into even our own bodies.

Yet the Internet of Things is more than the “things” – it means a higher order of software platforms. For example, if we are going to operate data centers with new dexterity thanks to software-defined networking (SDN) and storage (SDS) -- indeed the entire data center being software-defined (SDDC) -- then why not a software-defined automobile, or factory floor, or hospital operating room -- or even a software-defined city block or neighborhood?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

And so how does this all actually work? Does it easily spin out of control? Or does it remain under proper management and governance? Do we have unknown unknowns about what to expect with this new level of complexity, scale, and volume of input devices?

To help answer these questions, The Open Group and BriefingsDirect recently assembled a distinguished panel at The Open Group Boston Conference 2014 to explore the practical implications and limits of the Internet of Things.

The panelists are: Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC, and a primary representative to the Industrial Internet Consortium; Penelope Gordon, Emerging Technology Strategist at 1Plug Corporation; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM; and Dave Lounsbury, Chief Technical Officer at The Open Group. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jean-Francois, we have heard about this notion of "cities as platforms," and I think the public sector might offer us some opportunity to look at what is going to happen with the Internet of Things, and then extrapolate from that to understand what might happen in the private sector.

Hypothetically, the public sector has a lot to gain. It doesn't have to go through the same confines of a commercial market development, profit motive, and that sort of thing. Tell us a little bit about what the opportunity is in the public sector for smart cities.

Barsoum: It's immense. The first thing I want to do is link to something that Marshall Van Alstyne (Professor at Boston University and Researcher at MIT) had talked about, because I was thinking about his way of approaching platforms and thinking about how cities represent an example of that.

Barsoum

You don't have customers; you have citizens. Cities are starting to see themselves as platforms, as ways to communicate with their customers, their citizens, to get information from them and to communicate back to them. But the complexity with cities is that as good a platform as they could be, they're relatively rigid. They're legislated into existence and what they're responsible for is written into law. It's not really a market.

Chris Harding (Forum Director of The Open Group Open Platform 3.0) earlier mentioned, for example, water and traffic management. Cities could benefit greatly by managing traffic a lot better.

Part of the issue is that you might have a state or provincial government that looks after highways. You might have the central part of the city that looks after arterial networks. You might have a borough that would look after residential streets, and these different platforms end up not talking to each other.

They gather their own data. They put in their own widgets to collect information that concerns them, but do not necessarily share with their neighbor. One of the conditions that Marshall said would favor the emergence of a platform had to do with how much overlap there would be in your constituents and your customers. In this case, there's perfect overlap. It's the same citizen, but in effect they have to carry both an Android and an iPhone, even though that's not the best way of dealing with the situation.

The complexities are proportional to the amount of benefit you could get if you could solve them.

More hurdles

Gardner: More hurdles, more interoperability issues, and when you say commensurate, you're saying that the opportunity is huge, but the hurdles are huge and we're not quite sure how this is going to unfold.

Barsoum: That's right.

Gardner: Let's go to an area where the opportunity outstrips the challenge, manufacturing. Said, what is the opportunity for the software-defined factory floor to realize huge efficiencies and apply algorithmic benefits to how management occurs across the domains of supply chain, distribution, and logistics? It seems to me that this is a no-brainer. It's such an opportunity that the solution must be found.

Tabet: When it comes to manufacturing, the opportunities are probably much bigger. It's where a lot of progress has already been made, and work is still going on. There are two ways to look at it.

Tabet

One is the internal side of it, where you have improvements in business processes. For example, similar to what Jean-Francois said, in a lot of the larger companies that have factories all around the world, you'll see such improvements at the individual factory level. You still have those silos at that level.

Now with this new technology, with this connectedness, those improvements are going to be made across factories, and there's a learning aspect to it in terms of trying to manage that data so that, in fact, they do a better job. We still have to deal with interoperability, of course, and with additional issues that could be jurisdictional, and so on.

However, there is that learning that allows them to improve their processes across factories. Maintenance is one of them, as well as creating new products, and connecting better with their customers. We can see a lot of examples in the marketplace. I won't mention names, but there are lots of them out there with the large manufacturers.

Gardner: We've had just-in-time manufacturing and lean processes for quite some time, trying to compress the supply chain and distribution networks, but these haven't necessarily been done through public networks, the internet, or standardized approaches.

But if we're to benefit, we're going to need to be able to be platform companies, not just product companies. How do you go from being a proprietary set of manufacturing protocols and approaches to this wider, standardized interoperability architecture?

Tabet: That's a very good question, because now we're talking about that connection to the customer. With the airline and the jet engine manufacturer, for example, when the plane lands and there has been some monitoring of the activity during the whole flight, at that moment, they'll get that data made available. There could be improvements and maybe solutions available as soon as the plane lands.

Interoperability

That requires interoperability. It requires Platform 3.0 for example. If you don't have open platforms, then you'll deal with the same hurdles in terms of proprietary technologies and integration in a silo-based manner.

Gardner: Penelope, you've been writing about the obstacles to decision-making that might become apparent as big data becomes more prolific and people try to capture all the data about all the processes and analyze it. That's a little bit of a departure from the way we've made decisions in organizations, public and private, in the past.

Of course, one of the bigger tenets of Internet of Things is all this great data that will be available to us from so many different points. Is there a conundrum of some sort? Is there an unknown obstacle for how we, as organizations and individuals, can deal with that data? Is this going to be chaos, or is this going to be all the promises many organizations have led us to believe around big data in the Internet of Things?

Gordon: It's something that has just been accelerated. This is not a new problem in terms of the decision-making styles not matching the inputs that are being provided into the decision-making process.

Gordon

Former US President Bill Clinton was known for delaying making decisions. He's a head-type decision-maker and so he would always want more data and more data. That just gets into a never-ending loop, because as people collect data for him, there is always more data that you can collect, particularly on the quantitative side. Whereas, if it is distilled down and presented very succinctly and then balanced with the qualitative, that allows intuition to come to the fore, and you can make optimal decisions in that fashion.

Conversely, if you have someone who is a heart-type or gut-type decision-maker and you present them with a lot of data, their first response is to ignore the data. It's just too much for them to take in. Then you end up going entirely with whatever you feel is correct, whatever your instinct says is the correct decision. If you're talking about strategic decisions, where you're making a decision that's going to influence your direction five years down the road, that could be a very wrong decision to make, a very expensive decision, and as you said, it could be chaos.

It just brings to mind Dr. Seuss’s The Cat in the Hat, with Thing One and Thing Two. So, as we talk about the Internet of Things, we need to keep in mind that we need some sort of structure that we are tying this back to, and an understanding of what we are trying to do with these things.

If you have someone who is a heart-type or gut-type decision-maker and you present them with a lot of data, their first response is to ignore the data.

Gardner: Openness is important, and governance is essential. Then, we can start moving toward higher-order business platform benefits. But, so far, our panel has been a little bit cynical. We've heard that the opportunity and the challenges are commensurate in the public sector and that in manufacturing we're moving into a whole new area of interoperability, when we think about reaching out to customers and having a boundary that is managed between internal processes and external communications.

And we've heard that an overload of data could become a very serious problem and that we might not get benefits from big data through the Internet of Things, but perhaps even stumble and have less quality of decisions.

So Dave Lounsbury of The Open Group, will the same level of standardization work? Do we need a new type of standards approach, a different type of framework, or is this a natural extension of the path we have followed in the past?

Different level

Lounsbury: We need to look at the problem at a different level than the one at which we institutionally think about an interoperability problem. The Internet of Things is riding two very powerful waves. One is Moore's Law: these sensors, actuators, and networks get smaller and smaller. Now we can put Ethernet in a light switch, a tag, or something like that.

Lounsbury

The other is Metcalfe's Law, which says that the value of all this connectivity goes up with the square of the number of connected points. That applies both to the connection of the things and, more importantly, to the connection of the data.
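
To make that square-law scaling concrete, here is a quick back-of-the-envelope illustration in code. It is generic arithmetic, not a figure cited by the panel.

    # Metcalfe's Law: the value of a network grows roughly with the square
    # of the number of connected points (devices, sensors, or data sources).
    for n in (10, 100, 1000):
        print(f"{n:>5} connected points -> relative value ~ {n * n:,}")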

The trouble is, as we have said, that there's so much data here. The question is how do you manage it and how do you keep control over it so that you actually get business value from it. That's going to require this new concept of a platform that doesn't just connect the data, but aggregates it, correlates it, as you said, and presents it in ways that let people make decisions however they want.

Also, because of the raw volume, we have to start thinking about machine agency. We have to think about the system actually making the routine decisions or giving advice to the humans who are actually doing it. Those are important parts of the solution beyond just a simple "How do we connect all the stuff together?"

Gardner: We might need a higher order of intelligence, now that we have reached this border of what we can do with our conventional approaches to data, information, and process.

Thinking about where this works best first in order to then understand where it might end up later, I was intrigued again this morning by Professor Van Alstyne. He mentioned that in healthcare, we should expect major battles, that there is a turf element to this, that the organization, entity or even commercial corporation that controls and manages certain types of information and access to that information might have some very serious platform benefits.

The question is how do you manage it and how do you keep control over it so that you actually get business value from it.

The openness element now is something to look at, and I'll come back to the public sector. Is there a degree of openness that we could legislate or regulate to require enough control to prevent the next generation of lock-in, which might not be to a platform but to access to data, information, and endpoints? Where in the public sector might we look for a leadership position to establish the needed openness, and not just interoperability?

Barsoum: I'm not even sure where to start answering that question. To take healthcare as an example, I certainly didn't write the bible on healthcare IT systems and if someone did write that, I think they really need to publish it quickly.

We have a single-payer system in Canada, and you would think that would be relatively easy to manage. There is one entity that manages paying the doctors, and everybody gets covered the same way. Therefore, the data should be easily shared among all the players and it should be easy for you to go from your doctor, to your oncologist, to whomever, and maybe to your pharmacy, so that everybody has access to this same information.

We don't have that and we're nowhere near having that. If I look to other areas in the public sector, areas where we're beginning to solve the problem are ones where we face a crisis, and so we need to address that crisis rapidly.

Possibility of improvement

In the transportation infrastructure, we're getting to that point where the infrastructure we have just doesn't meet the needs. There's a constraint in terms of money, and we can't put much more money into the structure. Then, there are new technologies that are coming in. Chris had talked about driverless cars earlier. They're essentially throwing a wrench into the works or may be offering the possibility of improvement.

On any given piece of infrastructure, you could fit twice as many driverless cars as cars with human drivers in them. Given that set of circumstances, the governments are going to find they have no choice but to share data in order to be able to manage those. Are there cases where we could go ahead of a crisis in order to manage it? I certainly hope so.

Gardner: How about allowing some of the natural forces of marketplaces, behavior, groups, maybe even chaos theory, where if sufficient openness is maintained there will be some kind of a pattern that will emerge? We need to let this go through its paces, but if we have artificial barriers, that might be thwarted or power could go to places that we would regret later.

Barsoum: I agree. People often focus on structure: the governance doesn't work, so we should find some way to change the governance of transportation. London has done a very good job of that. They've created something called Transport for London that manages everything related to transportation. It doesn't matter if it's taxis, bicycles, pedestrians, boats, or cargo trains -- they manage it.

In the transportation infrastructure, we're getting to that point where the infrastructure we have just doesn't meet the needs.

You could do that, but it requires a lot of political effort. The other way to go about doing it is saying, "I'm not going to mess with the structures. I'm just going to require you to open and share all your data." So, you're creating a new environment where the governance, the structures, don't really matter so much anymore. Everybody shares the same data.

Gardner: Said, to the private sector example of manufacturing, you still want to have a global fabric of manufacturing capabilities. This is requiring many partners to work in concert, but with a vast new amount of data and new potential for efficiency.

How do you expect that openness will emerge in the manufacturing sector? How will interoperability play when you don't have to wait for legislation, but you do need to have cooperation and openness nonetheless?

Tabet: It comes back to the question you asked Dave about standards. I'll just give you some examples. For example, in the automotive industry, there have been some activities in Europe around specific standards for communication.

The Europeans came to the US and started to have discussions, and the Japanese have interest, as well as the Chinese. That shows that, because there is a common interest in creating these new models from a business standpoint, these challenges have to be dealt with together.

Managing complexity

When we talk about the amounts of data -- what we now call big data -- and what we are going to see in about five years or so, you can't even imagine. How do we manage that complexity, which is multidimensional? We talked about this sort of platform and, further, about the capability and the data that will be there. From that point of view, openness is the only way to go.

There's no way that we can stay away from it and still be able to work in silos in that new environment. There are lots of things that we take for granted today. I invite some of you to go back and read articles from 10 years ago that tried to predict the future of technology in the 21st century. Look at your smartphones. Adoption is there, because the business models are there, and we can see that progress moving forward.

Collaboration is a must, because this is a multidimensional problem. It's not just manufacturing -- jet engines, car manufacturers, or agriculture, where you have very specific areas. They really have to work with their customers and the customers of their customers.

Adoption is there, because the business models are there, and we can see that progress moving forward.

Gardner: Dave, I have a question for both you and Penelope. I've seen some instances where there has been a cooperative endeavor for accessing data, but then making it available as a service, whether it's an API, a data set, access to a data library, or even a set of analytics applications. The Ocean Observatories Initiative is one example: it has created a sensor network across the oceans and produces data that it then makes available.

Do you expect to see an intermediary organizational level that gets between the sensors and the consumers, or even the controllers of the processes? Is there a model inherent in that that we might look to -- something like that cooperative data structure that in some ways creates structure and governance, but also allows for freedom? It's the sort of entity that we don't have yet in many organizations or ecosystems and that needs to evolve.

Lounsbury: We're already seeing that in the marketplace. If you look at the commercial and social Internet of Things area, we're starting to see intermediaries or brokers cropping up that will connect the silo of my Android ecosystem to the ecosystem of package tracking or something like that. There are dozens and dozens of these cropping up.

In fact, you now see APIs even into a silo of what you might consider a proprietary system, and what people are doing is building a layer on top of those APIs that intermediates the data.

This is happening on a point-to-point basis now, but you can easily see the path forward. That's going to expand to large amounts of data that people will share through a third party. I can see this being a whole new emerging market much as what Google did for search. You could see that happening for the Internet of Things.

Gardner: Penelope, do you have any thoughts about how that would work? Is there a mutually assured benefit that would allow people to want to participate and cooperate with that third entity? Should they have governance and rules about good practices, best practices for that intermediary organization? Any thoughts about how data can be managed in this sort of hierarchical model?

Nothing new

Gordon: First, I'll contradict it a little bit. To me, a lot of this is nothing new, particularly coming from a marketing strategy perspective, with business intelligence (BI). Having various types of intermediaries, who are not only collecting the data, but then doing what we call data hygiene, synthesis, and even correlation of the data has been around for a long time.

It was interesting, when I looked at a recent listing of big-data companies, that some notable companies were excluded from that list -- companies like Nielsen. Nielsen's been collecting data for a long time. Harte-Hanks is another one that collects a tremendous amount of information and sells that to companies.

That leads into another part of it. We're seeing an increasing amount of opportunity that involves taking public sources of data and then providing synthesis on top of them. What remains to be seen is how much of the output of that is going to be provided for “free”, as opposed to for a fee. We're going to see a lot more companies figuring out creative ways of extracting more value out of data and then charging directly for that, rather than using it as an indirect way of generating traffic.

Gardner: We've seen examples of how this has worked. Does it scale, and does the governance -- or lack of governance -- in the market now sustain us through the transition into Platform 3.0 and the Internet of Things?

Having standards is going to increasingly become important, unless we really address a lot of the data illiteracy that we have.

Gordon: That aspect follows from "you get what you pay for." If you're using a free source of data, you have no guarantee that it comes from authoritative sources. Often, what we're getting now is something somebody put in a blog post, which then gets referenced elsewhere, but there was nothing to go back to. It's a shaky supply chain for data.

You need to think about the data supply and that is where the governance comes in. Having standards is going to increasingly become important, unless we really address a lot of the data illiteracy that we have. A lot of people do not understand how to analyze data.

One aspect of that is that a lot of people expect we have to do full population surveys, as opposed to representative sampling, which gives much more accurate and much more cost-effective collection of data. That's just one example, and we do need a lot more in governance and standards.

Gardner: What would you like to see changed most in order for the benefits and rewards of the Internet of Things to develop and overcome the drawbacks, the risks, the downside? What, in your opinion, would you like to see happen to make this a positive, rapid outcome? Let's start with you, Jean-Francois.

Barsoum: There are things that I have seen cities start to do now. There are a couple of examples: Philadelphia is one, and Barcelona does this too. Rather than doing the typical request for proposal (RFP), where they say, "This is the kind of solution we're looking for, and here are our parameters. Can you tell us how much it is going to cost to build?" they come to you with the problem and say, "Here is the problem I want to fix. Here are my priorities, and you're at liberty to decide how best to fix the problem, but tell us how much that would cost."

If you do that and you combine it with access to the public data that is available -- if public sector opens up its data -- you end up with a very powerful combination that liberates a lot of creativity. You can create a lot of new business models. We need to see much more of that. That's where I would start.

More education

Tabet: I agree with Jean-Francois on that. What I'd like to add is that I think we need to push the relation a little further. We need more education, to your point earlier, around the data and the capabilities.

We need these platforms that we can leverage a little bit further with the analytics, with machine learning, and with all of these capabilities that are out there. We have to also remember, when we talk about the Internet of Things, it is things talking to each other.

So it is not just human-machine communication. Machine-to-machine automation will go further than that, and we need more innovation and more work in this area, particularly more activity from governments. We've seen some, but it is a little bit frail from that point of view right now.

Gardner: Dave Lounsbury, any thoughts about what needs to happen in order to keep this on track?

Thank you for mentioning the machine-to-machine part, because there are plenty of projections that show that it's going to be the dominant form of Internet communication, probably within the next four years.

Lounsbury: We've touched on lot of them already. Thank you for mentioning the machine-to-machine part, because there are plenty of projections that show that it's going to be the dominant form of Internet communication, probably within the next four years.

So we need to start thinking of that and moving beyond our traditional models of humans talking through interfaces to set of services. We need to identify the building blocks of capability that you need to manage, not only the information flow and the skilled person that is going to produce it, but also how you manage the machine-to-machine interactions.

Gordon: I'd like to see not so much focus on data management, but focus on what the data is helping us to do. Focusing on the machine-to-machine communication and the devices is great, but the focus should not be on the devices or the machines; it should be on what they can accomplish by communicating -- what you can accomplish with the devices -- and then reverse-engineer from that.

Gardner: Let's go to some questions from the audience. The first one asks about the higher order of intelligence we mentioned earlier. It could be artificial intelligence, perhaps, but they ask whether that's really the issue. Is the nature of the data substantially different, or are we just creating more of the same, so that it is a storage, plumbing, and processing problem? What, if anything, is lacking in our current analytics capabilities that is holding us back from exploiting the Internet of Things?

Gordon: I've definitely seen that. That has a lot to do with not setting your decision objectives and your decision criteria ahead of time so that you end up collecting a whole bunch of data, and the important data gets lost in the mix. There is a term "data smog."

Most important

The solution is to figure out, before you go collecting data, what data is most important to you. If you can't collect certain kinds of data that are important to you directly, then think about how to indirectly collect that data and how to get proxies. But don't try to go and collect all the data for that. Narrow in on what is going to be most important and most representative of what you're trying to accomplish.

Gardner: Does anyone want to add to this idea of understanding what current analytics capabilities are lacking, if we have to adopt and absorb the Internet of Things?

Barsoum: There is one element around projection into the future. We've been very good at analyzing historical information to understand what's been happening in the past. We need to become better at projecting into the future, and obviously we've been doing that for some time already.

But so many variables are changing. Just to take the driverless car as an example. We've been collecting data from loop detectors, radar detectors, and even Bluetooth antennas to understand how traffic moves in the city. But we need to think harder about what that means and how we understand the city of tomorrow is going to work. That requires more thinking about the data, a little bit like what Penelope mentioned, how we interpret that, and how we push that out into the future.

Lounsbury: I have to agree with both. It's not about statistics. We can use historical data. It helps with a lot of things, but one of the major issues we still deal with today is the question of semantics, the meaning of the data. This goes back to your point, Penelope, around the relevance and the context of that information -- how you get what you need when you need it, so you can make the right decisions.

As soon as you talk about interoperability in the health sector, people start wondering where is their data going to go.

Gardner: Our last question from the audience goes back to Jean-Francois’s comments about the Canadian healthcare system. I imagine it applies to almost any healthcare system around the world. But it asks why interoperability is so difficult to achieve, when we have the power of the purse, that is the market. We also supposedly have the power of the legislation and regulation. You would think between one or the other or both that interoperability, because the stakes are so high, would happen. What's holding it up?

Barsoum: There are a couple of reasons. One, in the particular case of healthcare, is privacy, but that is one that you could see going elsewhere. As soon as you talk about interoperability in the health sector, people start wondering where is their data going to go and how accessible is it going to be and to whom.

You need to put a certain number of controls over top of that. What is happening in parallel is that you have people who own some data, who believe they have some power from owning that data, and that they will lose that power if they share it. That can come from doctors, hospitals, anywhere.

So there's a certain amount of change management you have to get beyond. Everybody has to focus on the welfare of the patient. They have to understand that that has to be the priority, but you also have to understand the welfare of the different stakeholders in the system and make sure that you do not forget about them, because if you forget about them they will find some way to slow you down.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Tags:  BriefingsDirect  cloud computing  Dana Gardner  Interarbor Solutions  Internet of things  Software-defined data center  software-defined networking  The Open Group  The Open Group Conference 


How Waste Management builds a powerful services continuum across IT operations, infrastructure, development, and processes

Posted By Dana L Gardner, Wednesday, September 10, 2014

It's only been a few years since Waste Management's IT organization began rebuilding its quality assurance processes from the ground up.

"Our availability scorecard was pretty bad. Our services were down. At times, we didn’t know that our services were down. Our first indication of a problem was from customers calling us," remembers Gautam Roy, Vice President of Infrastructure, Operations and Technical Services at Waste Management in Houston, Texas.

"Now, fast-forward a few years -- with making the appropriate choices and investments in technology, such as in people and processes -- and our scorecard is very good. We know of the problems rapidly. We proactively detect problems and fix the problems before they impact our customers," he says.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how Waste Management came to deliver 4 9s availability for its critical applications, BriefingsDirect sat down with Roy at the recent HP Discover conference in Las Vegas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Roy: Waste Management is an environmental services company. We have primarily three lines of business. First is waste service. This is our traditional waste pickup, transfer, and disposal. Our second line of business is renewable energy or green energy, and our third is recycling.

Roy

What makes Waste Management different from others in the waste industry is that we also invest quite a lot of effort in next-generation waste technology. We invest in companies like Agilyx, which converts very hard-to-recycle waste, such as plastic, into crude oil. We convert organic food waste into natural gas. We pressurize, scrub, and dry municipal solid waste into solid fuel, which burns cleaner than coal.

And we're quite diverse, a global company. We have operations in the US and Canada, Asia, and Europe. We have our renewable energy plants. There is quite a large array of technology and IT to support these business processes to ensure consistent business-services availability.

Gardner: As with many organizations, gaining greater visibility into operations -- having earlier detection of problems, and therefore earlier remediation -- means better performance. What were some of the drivers for your organization specifically to mature your IT operations?

Business transformation

Roy: I'll give a few business reasons, and a couple of technology reasons. From the business side, we began a business transformation a couple of years ago. We wanted to ensure that we unlocked the value for our customers and for us, and to institutionalize the benefits for Waste Management.

Customer care -- providing outstanding, world-class customer service -- is aligned completely with our business strategy. Business-services availability is crucial; it's in our DNA. Our IT business-service availability scorecard a few years ago wasn't too good. So we had to put the focus on people, process, and technology to ensure that we provide a very consistent service set to our customers.

Gardner: Moving across the spectrum of development, test, and operations can be challenging for many organizations. You have put in place standardized processes to measure, organize, and perform better across the DevOps spectrum. Tell us how you accomplished that. How did you get there?

Roy: That's a very good question. For us, IT business-service availability is really not about having a great monitoring solution. It starts even before the services are in production. It starts with partnership with our business and business requirements. It starts with having a great development methodology and a robust testing program. It starts with architecture processes, standardization, and communication. All those things have to be in place. And you have to have security services and a monitoring solution to wrap it up.

We try to approach it from the front end, instead of chasing it from the back end.

What we are trying to do is not fight the issue at the back end. If a service is down, our monitoring software picks it up, our operational and engineering teams jump on it, and we are able to fix the problem ASAP, before it impacts the customer. Great. But, boy, wouldn't it be nice if those services weren't going down in the first place? So we try to approach it from the front end, instead of just chasing it from the back end.

Gardner: So it’s Application Lifecycle Management (ALM) and Business Service Management (BSM), not one or the other, but really both -- and simultaneously?

Roy: Exactly, ALM, BSM, testing, and security products. We also want to make sure that the services are not down from intentional disruption. We want to make sure that we produce code with quality and velocity, and code that is consistent with the experience of our customer.

With our operational processes, ITIL and Lean IT, we want to make sure that change management and incident management are followed to our prescription. We want to make sure that the disaster-recovery (DR) program, the high-availability (HA) program, the security operations center (SOC), the network operations center (NOC), and the command centers are all working together to ensure that the services are up 24/7, 365.

Gardner: And when you do this well, when you have put in place many of the capabilities that we have been describing, do you have any sense of payback? Do you keep score?

Availability scorecard

Roy: A few years ago, when we were not as good at it, we started rebuilding this all from the ground up, and our availability scorecard was pretty bad. Our services were down. At times, we didn’t know that our services were down. Our first indication of a problem was from customers calling us.

Now, fast-forward a few years, with making the appropriate choices and investments in technology -- as well as in people and processes -- and our scorecard is very good. We know of the problems rapidly. We proactively detect problems and fix the problems before they impact our customers.

We have 4 9s availability for our critical applications. We're able to provide services to our customers via wm.com, our digital channel, and it has been quite a success story. We still have ground to cover, but it has been following the right trajectory.
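
For context, a quick back-of-the-envelope calculation shows what a 4 9s (99.99 percent) target implies as a downtime budget. This is general arithmetic, not a figure reported by Waste Management.

    # 99.99 percent availability leaves roughly 53 minutes of downtime per year.
    availability = 0.9999
    minutes_per_year = 365 * 24 * 60
    downtime_minutes = (1 - availability) * minutes_per_year
    print(f"Allowed downtime: about {downtime_minutes:.0f} minutes per year")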

Gardner: Here at HP Discover, are there any developments that you're monitoring closely? Are there some things that you're particularly interested in that might help you continue to close the gap on quality?

We want to provide optimal solutions at a right price point for our customers and our business.

Roy: Sure. Things like understanding what's happening in the world of big data and HP’s views and position on that. I want to understand and learn about testing, software testing, how to test faster and produce better code, and to ensure, on a continuous basis, that we're reducing the cost of running the business. We want to provide optimal solutions at a right price point for our customers and our business.

Gardner: On that topic of big data, are you referring to the data generated within IT, in your systems, to be able to better analyze and react to that? Or perhaps also the data from your marketplace, things that your customers might be saying in social media, for example? Or is it all of the above?

Roy: It’s all of the above. We have internal data that we're harvesting. We want to understand what it’s telling us. And we'd like to predict certain trends of our system, across the use of our applications.

Externally, we have 18 call centers. We get user calls. We also want to know our customer better and serve them the best. So we want to move into a situation where we can take their issues, frame them into solutions, and proactively service them the best in our industry.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Dana Gardner  Gautam Roy  HP  HP ALM  HP BSM  HP DISCOVER  Interarbor Solutions  ITIL  Waste Management 


GSN Games hits top prize using big data to uncover deep insights into gamer preferences

Posted By Dana L Gardner, Monday, September 08, 2014

It's a shame when the data analysis providers inside a company get the cold shoulder from the business leaders because the data keeps proving the status quo wrong, or contradicts the conventional corporate wisdom.

Fortunately for GSN Games in San Francisco, there's no such culture clash there. "The real thing that's helped us get to the point we are is a culture where everybody is open to being wrong -- and open to being proven wrong by the data," says Portman Wills, Vice President of Data at GSN Games.

"One of the things we use data for is to challenge all of our assumptions about our own products and our own businesses, says Wills. "It's really gotten to a point where it's almost religious in our company. The moment two people start debating what should or shouldn't happen, they say, 'Well let's just let the data decide.' That's been a core change not just for us, but for the game industry as a whole."

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

How did GSN Games get to the point where the data usually wins? It took a blazing-fast data warehouse of 1.3 trillion rows that consumes and stores data from some 110 million registered game-players and produces analysis in near real time. The next BriefingsDirect podcast focuses on just how GSN Games exploits such big data to effectively uncover game-changing entertainment trends for its audience. Oh, and it changes corporate cultures, too.

The discussion, at the recent HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Wills

Wills: GSN started as a cable network in the U.S. We’re distributed in 80 million households as the Game Show Network, and then we also have a digital wing that produces casual and social games on Facebook, web, tablets, and mobile. That division has 110 million registered game-players. My team takes data from all over those worlds, throws them into a big data warehouse, and starts trying to find trends and insights for both our TV audience and our online game-players.

In terms of the games, which is really where the growth is, our core demographic is older females, believe it or not, who love playing casual games. We skew more in the 55-plus age range, and we have players from all over the world.

Gardner: The word “games” means a lot of different things to a lot of people. We’re talking about a heritage of network television games back in the ’60s and ’70s that have led us to what is now your organization. But what sort of newer games are we talking about, and what proportion of them are online games, versus more of the passive watching, like that on cable or other media outlets?

Wills: Originally, when our games division started as a branch of GSN, it was companion games to Wheel of Fortune, Minute to Win It, whatever the hot game show was. That's still a part of it, but the growth in the last few years has been in social games on Facebook, where a lot of our games are more casual titles and have nothing to do with the game shows -- tile-matching games or solitaire games, for example.

In the last year or year-and-a-half for us, like everyone else, there’s been this explosion in mobile.

Then, in the last year or year-and-a-half for us, like everyone else, there’s been this explosion in mobile. So it’s iPad, Android, and iPhone games, and there we have the solitaires and the tile matching, too.

Increasingly, a lot of our success and growth has come from virtual casino games. People are playing Bingo, video poker, even slots, virtual slots. We have this title called GSN Casino. That’s an umbrella app with a lot of mini games that are casino-themed, and that one has just exploded in the last six months. It's a long way from the Point A of Family Feud reruns to the Point Z of virtual slot machines, but hopefully you can see how we got there.

Gardner: It seems like a long distance, but it’s also been a fairly short amount of time. It wasn't that long ago that the information you might have on your audience came through Nielsen for passive audiences, and you had basically a one- or two-dimensional view of that individual, based on an estimate of what time was devoted to a show. But now, with the mobile devices in particular, you have a plethora of data.

Tell us about the types of data that you can get, and what volumes are we talking about.

Mobile experience

Wills: Let’s take mobile, because I think it's easy to grok. Everything about the device is exposed to us. The fact that you’re playing on an iPad Mini Retina versus an iPad 1 tells us a lot about you, whether you know it or not.

Then, a lot of our users sign in via Facebook, which is another vector for information. If you sign in via Facebook, Facebook provides us your age range, gender, some granular location information. For every player, we get between 40 and 50 dimensions of data about that player or about that device.

That’s one bucket. But the actual gameplay is another whole bucket. What games do you choose to play in our catalog? How long do you play them? What time of day do you play them? Those start to classify users into various buckets -- from the casual commute player, who plays for 15 minutes every morning and afternoon, to the hard-core player who spends 8 to 10 hours a day, believe it or not, playing our games on their mobile devices.
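
As a rough sketch of what that kind of bucketing can look like in code, the snippet below sorts players by daily play time. The thresholds, player IDs, and segment names are illustrative assumptions, not GSN's actual segmentation rules.

    # Hypothetical minutes played per day for a few players.
    daily_minutes = {"p1": 28, "p2": 95, "p3": 510}

    def classify(minutes_per_day):
        # Cutoffs below are assumptions for illustration only.
        if minutes_per_day < 40:
            return "casual commute player"
        if minutes_per_day < 240:
            return "regular player"
        return "hard-core player"

    for player, minutes in daily_minutes.items():
        print(player, classify(minutes))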

Mobile doesn’t necessarily mean mobile, like out and about. A lot of our players are on their iPad, sitting on the couch in their home.

At that point, and this is a little bit of a pet peeve of mine, mobile doesn’t necessarily mean mobile, like out and about. A lot of our players are on their iPad, sitting on the couch in their home.

It’s not mobility. They’re not using 3G. They’re not using augmented reality. It’s just a device that happens to be a very convenient device for playing games. So it’s much more of a laptop replacement than any sort of mobile thing. That’s sort of a side track.

We collect all of this data, and it’s a fair amount. Right now, we’re generating about 900 million events per day across all of our players. That’s all streamed into our HP Vertica data warehouse, and there are a few tables, event time series tables, that we put the stuff into. A small table for us would be a few hundred billion records, and a large table, as I said, is 1.3 trillion records right now.

So the scale is big for us. I know that for other companies that seems like peanuts. It’s funny how big data is so broad. What’s big to one person is tiny to someone else, but this is the world that we’re dealing in right now.

We have 110 million players. Thankfully, not all of them are active at one time. That would be really big data. But we will have about 20 million at any given time in peak time playing concurrently. That’s a little bit about the numbers in our data warehouse.

Gardner: Understanding your audience through this data is something fairly new. Before, you couldn’t get this amount of data. Now that you have it, what is it able to do for you? Are you crafting new games based on your findings? Are you finding information that you can deliver back to a marketer or advertiser that links them to the audience better? There must be many things you can do.

No advertising

Wills: First of all, we don’t do any advertising in our mobile games. So that’s one piece that we’re not doing, although I know others are. But there are two broad buckets in which we use data. The first is that we run a lot of the A/B tests, experiments. All of our games are constantly being multivariate tested with different versions of that same game in the field.

We run 20 to 40 tests per week. As an example, we have a Wheel of Fortune game that we recently released, and there was all this debate about the difficulty of the puzzles. How hard should the puzzles be? Should they be very obscure pieces of Eastern literature, mainstream pop culture, or even easier?

So, we tested different levels of difficulty. Some players got the easy, some players got the medium, and some players got the hard ones. We can measure the return rate, the session duration, and the monetization for people who buy power-ups, and we see which level of difficulty performs the best. In the first test of easy, medium, hard, easy overwhelmingly did the best.

So we generated a whole bunch of new puzzles that were even easier than the previous easy ones and tested those against what was now the control level. The easier puzzles won again. So we generated a whole new set of puzzles that were absurdly easy. We were trying to prove the point that if we gave Wheel of Fortune puzzles that are four-letter words like “bird” and “cups,” nobody would enjoy playing something that simplistic.

Well, it turns out that they do -- surprise, surprise -- and so that’s how we evolved into a version of Wheel of Fortune that, compared to the game show, looks very different, but it’s actually what customers want. It’s what players want. They want to relax and solve simple puzzles like “door.”
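
As a rough illustration of how such a test might be scored, the sketch below compares per-variant return rate, session length, and spend. The sample records and metric names are assumptions for illustration, not GSN's actual pipeline or data.

    from collections import defaultdict

    # Hypothetical per-player results: (variant, returned_next_day, session_minutes, spend_usd)
    results = [
        ("easy",   True,  34, 1.99),
        ("easy",   True,  21, 0.00),
        ("medium", False, 12, 0.00),
        ("medium", True,  18, 0.99),
        ("hard",   False,  8, 0.00),
    ]

    totals = defaultdict(lambda: {"players": 0, "returns": 0, "minutes": 0.0, "spend": 0.0})
    for variant, returned, minutes, spend in results:
        t = totals[variant]
        t["players"] += 1
        t["returns"] += int(returned)
        t["minutes"] += minutes
        t["spend"] += spend

    for variant, t in totals.items():
        n = t["players"]
        print(f"{variant}: return rate {t['returns'] / n:.0%}, "
              f"avg session {t['minutes'] / n:.1f} min, revenue per player ${t['spend'] / n:.2f}")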

Hopefully faster than overnight. Overnight is a little too slow these days.

Gardner: So Vertica analysis determined that everyone is a winner on GSN, but you’re able to do real-time focus-group types of activities. The data -- because it's so fast, because there is so much information available and you can deal with it so quickly -- means that you’re able to tune your games to the audience virtually overnight.

Wills: Hopefully faster than overnight. Overnight is a little too slow these days. We push twice a day, both platform code and updates to all of our games -- in the morning around 11 a.m. and in the afternoon around 3:30. Each one of those releases is based on the data that came from the prior release.

So we're constantly evolving these games. I want to go back to your previous question, because I only got to talk about one bucket, which is this experimentation. The other bucket is using the usage patterns that customers have to evolve our product in ways that aren’t necessarily structured around an A/B test.

We thought when we launched our iPhone app that there would be a lot of commuting usage. We had in our head this hypothetical bus player, who plays on the bus in the morning. And so we thought we would build all the stuff around daily patterns. We built this daily return bonus that you can do in the morning and then again in the evening.

The data showed us that that really was only a tiny fraction of our players. There were, in fact, very few players who had this bimodal, morning and evening usage pattern. Most people didn't play at all until after dinner and then they would play a lot, sometimes even binge from 7 p.m. until 2 a.m. on games.

False assumptions

That was an area where we didn't even set up an experiment. We just had false assumptions about our player base. And that happens a surprising amount of the time. We all -- especially the game-design team and people who spent their careers designing video games -- have assumptions about their audience that half the time are just wrong. One of the things we use data for is to challenge all of our assumptions about our own products and our own businesses.

It's really gotten to a point where it's almost religious in our company. The moment two people start debating what should or shouldn't happen, they say, “Well let's just let the data decide.” That's been a core change not just for us, but for the game industry as a whole.

Because we’re here in Spain, a quick tidbit that we uncovered recently is that our main time-frame in every country on Earth, when people play games, is 7 p.m. to 11 p.m., except in Spain where it’s 1 p.m. to 3 p.m. -- siesta time. That’s just one of the examples of how we use big data to discover insights about our players and our audiences worldwide.

Understanding the audience

Gardner: I have to imagine that the data that led you to that inference in Spain was something other than what we might consider typical structured data. How did the different data brought together allow you to understand your audience better?

Wills: We use this product from HP called Vertica, which is just a tremendous data warehouse, that lets us throw every single click, touch, or swipe in all of our games into a big table. By big, I mean right now it’s I think 1.3 trillion rows. We keep saying that we should really archive this thing. Then, we say we’ll archive it when it slows down, and then it just never slows down, so we have yet to archive it.

We put all of the click stream data in there. The traditional joins, schemas, and all of that don’t really have to happen because we have one table with all of the interactions. You have the device, the country, the player, all these attributes. It’s a very wide table. So if you want to do things like ask what is the usage in five-minute slices by country, it’s a simple SQL query, and you get your results.
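
As a rough sketch of what such a query could look like, the snippet below computes activity in five-minute slices by country from a single wide event table. The table and column names are assumptions for illustration, and SQLite stands in here for a column-store warehouse such as Vertica.

    import sqlite3  # local stand-in engine; in production this would run against the warehouse itself

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE game_events (player_id TEXT, country TEXT, event_ts INTEGER)")
    conn.executemany(
        "INSERT INTO game_events VALUES (?, ?, ?)",
        [("p1", "US", 1000), ("p1", "US", 1100), ("p2", "ES", 1250), ("p3", "ES", 1400)],
    )

    # Usage in five-minute (300-second) slices, by country, straight off the wide event table.
    FIVE_MINUTE_USAGE = """
        SELECT country,
               (event_ts / 300) * 300    AS slice_start,
               COUNT(*)                  AS events,
               COUNT(DISTINCT player_id) AS active_players
        FROM game_events
        GROUP BY country, slice_start
        ORDER BY country, slice_start
    """

    for row in conn.execute(FIVE_MINUTE_USAGE):
        print(row)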

Gardner: What you’re describing is very much desired by a lot of types of businesses through understanding a massive amount of data from their audience, to be able to react quickly to that, and then to stop guessing about products and pricing and distribution and logistics and supply chain and be driven purely by the data. You’re a really interesting harbinger of things to come.

One of the things we use data for is to challenge all of our assumptions about our own products and our own businesses.

Portman, tell me a little bit about the process by which you were able to do this. Did you have an older data warehouse? What did you use before, and how did you make a transition to HP Vertica?

Wills: When we started the social mobile business three years ago, we were on MySQL, which we are still on for our transactional load. We have three data centers around the world. When people are playing our games, it’s recording, reading, and writing 125,000 transactions per second, and that MySQL, sharded out, works great for that.

When you want to look at your entire player base and do a cross-shard query, we found that MySQL really fell down. Our original Vertica proof of concept (POC) was just to replace these A/B test queries, which have to look across the entire population.

So in comes Vertica. We set up a single-node Vertica data warehouse and pulled in a year's worth of data, and the same query to synthesize these sessions ran in 800 milliseconds.

So the thing that took 24 hours, which is 86,400 seconds, ran in less than one second. By the way, that 24-hour query was running across dozens of machines, and this Vertica query was running on a single server of commodity hardware.

That's when we really became believers in the power of the column store and column-oriented data warehouses. From the small beginning of just one simple query, it’s now expanded -- and pretty much our whole business runs on top of HP Vertica on the data warehouse side.

Lessons learned

Gardner: As I said, I think GSN Games is a really harbinger of what a lot of other companies in many different vertical industries will be seeking. Looking back, if you had to do it again, what might you have done differently or what suggestions might you have for others who would like to be able to do what you are doing?

Wills: I definitely wish that we had switched to a column store sooner. I think the reason that we've been so successful at this is because of our game design team, which was so open to using data.

I definitely wish that we had switched to a column store sooner.

I’ve heard horror stories from other companies that want to use a data-driven approach, where there's just a lot of cultural inertia and pushback against doing that. It's hard to be consistently proven wrong in your job, which is always what happens when you rely on data.

The real thing that's helped us get to the point we are in is a culture and a company where everybody is open to being wrong -- and open to being proven wrong by the data, which I am very thankful for.

Gardner: Well, it's good to be data-driven, and I think you should feel good being responsible for making 110 million people feel good about themselves every day.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  big data  BriefingsDirect  Dana Gardner  data analytics  GSN Games  HP  HP Vertica  HPDiscover  Interarbor Solutions  Portman Wills  SQL 


Hybrid cloud models demand more infrastructure standardization, says global service provider Steria

Posted By Dana L Gardner, Friday, August 22, 2014

The old model of just being an outsourcer or on-premises service provider is dead for many IT solutions providers. Instead, we’re all now in a hybrid world where we will have some private-cloud solutions and multiple public clouds. The challenge is to have the right level of governance, and to be in a position to move workloads and adjust them to the needs.

These words of wisdom come from European IT services provider Steria, which, along with hundreds of its customers, is charting a journey to hybrid cloud while maintaining control, automation, and reporting across all IT infrastructure.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how services standardization leads to improved hybrid cloud automation, BriefingsDirect spoke to Eric Fradet, Industrialization Director at Steria in Paris. The discussion, at the recent HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Fradet: Steria is a 40-year-old services company, mainly based in Europe, with large locations in India and Singapore. We provide all types of IT-related services, ranging from infrastructure management to application management. We help to develop and deploy new IT services for all our customers.

Gardner: How are your activities at Steria helping you better deliver the choice of cloud and software-as-a-service (SaaS) to your customers?

Fradet: That change may be quicker than expected. So, we must be in a position to manage the services wherever they’re from. The old model of saying that we’re an outsourcer or on-premises service provider is dead. Today, we’re in a hybrid world and we must manage that type of world. That must be done in collaboration with partners, and we share the same target, the same ambition, and the same vision.

Benefit, not a pain

The cloud must not be seen as disruptive by our customers. Cloud is here to accompany their transformation. It must be a benefit for them, not a pain.

Fradet

A private solution may be the best starting point for some customers. A full public solution may be the target. We're here to manage their journey and to define with the customer the best solution for each need.

Gardner: And in order to make that transition -- from private to public, to multiple public clouds, or to sourced-infrastructure support -- a degree of standardization is required. Otherwise, it's not possible. Do you have a preferred approach to standardization?

Fradet: The choice of HP as a partner was based on two main criteria. First of all, the quality of the solution, obviously, but there are multiple good solutions on the market. The second one is HP's capacity to enable a smooth transition, and that means getting the industrialization benefits and the economic benefits while also being open and interconnected with existing IT systems.

That's why the future model is quite simple. We know we will have remaining on-premises, physical infrastructure. We will have some private-cloud solutions and multiple public clouds, as you mentioned. The challenge is to have the right level of governance, and to be in a position to move workloads and adjust them to the needs.

We continue to invest deeply in ITSM because ITSM is service management.

Gardner: Of course, once you've been able to implement across a spectrum of hosting possibilities, then there is the task of managing that over time, being able to govern and have control.

Fradet: With HP, we have a layered approach, which is quite simple. First of all, if you want to manage, you must control, as you mentioned. We continue to invest deeply in IT Service Management (ITSM) because ITSM is service governance. In addition, we have some more innovative solutions based on the latest version of HP Cloud Service Automation (CSA). Control, automation, and reporting remain key, whatever the cloud or non-cloud infrastructure.

Gardner: Of course, another big topic these days is big data. I would think that part of the management capability would be the ability to track all the data from all the systems, regardless of where they're physically hosted. Do you have a preference, or have you embarked on a big-data platform that allows you to manage and monitor IT systems regardless of volume and location?

Fradet: Yes, we have some very interesting initiatives with HP around HAVEn, which is obviously one of the most mature big-data platforms. The challenge for us is to transform a technologically wonderful solution into a business solution. We’re working with our business units to define use-cases that are totally tailored and adjusted for the business, but big data is one of our big challenges.

Traditional approach

Gardner: Have you been using a more traditional data-warehouse approach, or are you not yet architecting the capability? Are you still in a proof-of-concept stage?

Fradet: Unfortunately, we have hundreds of data-warehouse solutions, which are customer-dedicated, ranging from very old-fashioned reporting to operational key performance indicators (KPIs) to advanced business intelligence (BI).

The challenge now is really to design for what the top data-warehouse requirements will be, and you know that there is a mix of needs in terms of data warehouses. Some are pure operational KPIs, some are analytics, and some are really big-data needs. Designing the right solution for the customer remains a challenge. But we're very confident that with HAVEn, sometime in 2014, we will have the right solution for those issues.

Gardner: Lastly, Eric, the movement toward cloud models for a lot of organizations is still in the planning stages. They're mindful of the vision, but they also have IT housecleaning to do internally. Do you have any suggestions on how to properly modernize, or move toward an architecture that gives them a better approach to cloud and sets them up for less risk and less disruption? What are some observations you have on how to prepare for moving toward a cloud model?

Cloud can offer many combinations or many benefits, but you have to define as a first step your preferred benefits.

Fradet: As with any transformation program, the cloud-eligibility assessment remains key. That means we have to define the policy with the customer. What is their expectation -- time to market, cost savings, more efficient management?

Cloud can offer many combinations or many benefits, but you have to define as a first step your preferred benefits. Then, when the methodology is clearly defined, the journey to the cloud is not very different from any other program. It must not be seen as disruptive, keeping in mind that you do it for the benefits and not only for technical reasons.

So don't jump to the cloud without having strong resources below the cloud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  big data  BriefingsDirect  cloud computing  CSA  Dana Gardner  Eric Fradet  HAVEn  HP  HPDiscover  hybrid cloud  Interarbor Solutions  ITSM  SaaS  Steria 


Service providers gain new levels of actionable customer intelligence from big data analytics

Posted By Dana L Gardner, Monday, August 11, 2014

It’s no secret that communication service providers (CSPs) are under a lot of pressure as they make massive investments in upgraded networks while facing shrinking margins and revenues from their eroding traditional voice or broadcasting businesses.

Traditional operators understand that they must go beyond what they did before. They need to offer more compelling services to reduce churn and acquire new customers. But how do they know which services customers want most, and how much to charge for them?

A key asset CSPs have is the huge amount of information that they generate and maintain. And so it's the analytics from their massive data sets that becomes the go-to knowledge resource as CSPs re-invent themselves.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our next Big Data innovation discussion therefore explores how the telecommunication service-provider industry is gaining new business analytic value and strategic return through the better use and refinement of their Big Data assets.

To learn more about how analytics has become a business imperative for service providers, peruse this interview with Oded Ringer, Worldwide Solution Enablement Lead for HP Communication and Media Solutions. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the major trends leading CSPs to view themselves as being more data-driven organizations?

Ringer: CSPs are under a lot of pressure. On one hand, this industry has never been more central. Everybody is connected, spending so much more time online than ever before, and carrying with them small devices through which they connect to the network. So CSPs are central to our work and personal lives -- and as a result, they're under a lot of pressure.

Ringer

They're under a lot of pressure, because they're required to make massive investments in the networks, but they also need to deal with shrinking margins and revenues to subsidize these investments. So, at the end of the day, they're squeezed between these two forces.

One approach many CSPs have adopted in the last year is to reduce costs and cut operations. But this is pretty much a trip to nowhere. Retreating to the most basic, commodity services is no way to survive.

In the last two to three years, more and more traditional operators have come to understand that they must go beyond what they did before. They need to offer more compelling services to reduce churn and acquire new customers. They need to leverage their position as the central point between consumers and what they're looking for, to become brokers of information.

The key asset they have in their hands to become such brokers is the huge amount of information that they maintain. That's exactly where analytics comes into play.

Talking about mobile

Gardner: When we say CSP and telecommunication companies these days, we’re more and more talking about mobile, right? How big a shift has mobile been in terms of the need to analyze use patterns and get to know what's really happening out in the mobile network?

Ringer: Mobile services are certainly the leading tool in most operators' arsenals. Operators that have the subscriber "connected" with them wherever they go, around the clock, have an advantage over those that only provide, or depend mostly upon, tethered services.

But we need to keep in mind that there's also a whole space for analytics solutions related to fixed-line services, like cable, satellite, broadband, and other landline services. CSPs are investing a lot in becoming more predictive: finding out what the subscriber really wants, what the quality of those services is at any given time, and how they can reduce churn in their customer base.

Another kind of analytics practice that operators adopt is being predictive about their network investments: understanding which network segments are used by high-value customers they want to improve service for, and beefing up those segments rather than the others.

Again, it's the mobile operators who are on the front lines of doing more with subscriber data and information in general, but the same is true for cable operators, pay-TV operators, and landline CSPs.

CSPs, unlike most enterprises, need to handle not only the structured data that’s coming from databases and so on, but also unstructured data.

Gardner: Oded, what are some of the data challenges specific to CSPs?

Ringer: In the CSP industry, Big Data is bigger than in any other industry. Bigger, first of all, in volume. There is no other industry that handles this amount of data, if you take into consideration that they're carrying everybody's data, consumer and enterprise. But that's one aspect, and it's not even the most complicated one.

The more complicated thing is the fact that CSPs, unlike most enterprises, need to handle not only the structured data that’s coming from databases and so on, but also unstructured data, such as web communication, voice communication, and video content. They want to analyze all those things, and this requires analyzing unstructured data. 

So that’s a significant change in that type of process flow. They are also facing the need to look at new sets of structured data, data from IT management and security log files, from sensors and end-point mobile device telematics, cable set-top boxes, etc.

And second, in the CSP industry, because everything is coming from the wire, there's no such thing as purely off-line or batch analytics -- everything needs to be real time. Of course, this doesn't mean that there will not be off-line or batch analytics, but even these are becoming more complex and span many more data sets across multiple enterprise silos.

More real time

If you analyze subscriber behavior right now and you want to make an offer to improve the experience that he’s having in real time, you need to capture the degradation of service right now and correlate it with what you know about the subscriber right now. So it's so much more real time than in any other industry. 

The market is still young. So it's very hard to say which one will be more dominant.

We’re not talking here about projects of data consolidation. It may be necessary in some cases, but that’s not really the practice that we’re talking about here. We’re talking about federating, referring to external information, analyzing in the context of the logic that we want to apply, and making real-time decisions.

In short, CSP Big Data analytics is Big Data analytics on steroids.
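
As a purely illustrative aside, here is a minimal Python sketch of what "federating rather than consolidating" can look like in practice: a live network event is joined against an external reference lookup and a decision is made in context, without copying either data set into a central store. The function names, fields, and bitrate threshold are hypothetical stand-ins, not references to any actual HP or CSP component.

```python
# Minimal sketch of federated, real-time decisioning (illustrative only).
# fetch_subscriber_profile and the bitrate threshold are hypothetical stand-ins.

from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

@dataclass
class NetworkEvent:
    subscriber_id: str
    metric: str        # e.g., "video_bitrate_mbps"
    value: float

def fetch_subscriber_profile(subscriber_id: str) -> dict:
    """Stand-in for a federated lookup against an external CRM or billing system."""
    return {"segment": "high_value", "package": "basic"}

def decide(event: NetworkEvent, profile: dict) -> str:
    """Apply the business logic in context, without consolidating either data set."""
    if event.metric == "video_bitrate_mbps" and event.value < 1.5:
        if profile["segment"] == "high_value" and profile["package"] == "basic":
            return "offer_upgrade"
        return "flag_for_follow_up"
    return "no_action"

def process(stream: Iterable[NetworkEvent]) -> Iterator[Tuple[str, str]]:
    for event in stream:
        profile = fetch_subscriber_profile(event.subscriber_id)  # external reference data
        yield event.subscriber_id, decide(event, profile)        # real-time decision

# Example: one degraded streaming session for a high-value subscriber on a basic package.
print(list(process([NetworkEvent("sub-42", "video_bitrate_mbps", 0.8)])))
```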

Gardner: What does a long-term solution look like, rather than cherry-picking against some of these analytics requirements? Is there a more strategic approach that would pay off longer term and put these organizations in a better position, knowing more and more requirements will be coming their way?

Ringer: Actually, we see two kinds of behavior. The market is still young. So it's very hard to say which one will be more dominant. We see some CSPs that come to us with a very clear idea of what business process they want to implement and how they believe a data-driven approach can be applied to it.

They have a clear model and a clear return on investment (ROI), and they want to go for it and implement it. Of course, they need the technology, the processes, and the business projects, but their focus is pretty much on a single use case or a set of interrelated use cases. That's one trend.

There's another trend in which operators say they need to start looking at their data as an asset, as an area they want to centralize. They want to control it in a productive manner -- for security, for privacy, and for the ability to leverage it for different purposes.

Central asset

Those will typically come with a roadmap of different implementations that they would like to do via this Big Data facility that they have in mind and want to implement. But what’s more important for them is not the quickest time to launch specific processes, but to start treating the data as a central asset and to start building a business plan around it. 

I guess both trends will continue for quite a while, but we see them both in the market, sometimes even in different organizations within the same company.

Gardner: How can a CSP really change its identity from being a pipe, a conduit, to being more of a rich services provider on top of communications?

And what is it that HP is bringing to the table? What is it about HP HAVEn, in particular, that is well suited to where the telecommunications industry is going and what the requirements are?

Ringer: HP has made huge investments in the space of Big Data in general and analytics in particular, both through in-house development of multiple products and through acquisitions of external assets.

Complete platform

HAVEn is now the complete platform that includes multiple best-in-class product elements based on multiple cutting-edge yet proven technologies for exploiting Big Data and analytics. Our solution for the space is pretty much based on HAVEn, expanded with specific solutions for CSP needs and with a wide gallery of connectors for the external data sources that exist within the CSP space.

In short, we’re taking HAVEn and using it for the CSP industry with lots of knowledge about what traditional CSP operators need to become next-generation CSPs. Why? 

Because we have a very large group of telecom experts within HP who interact with and leverage what we're doing in other industries and with many of the new-age service providers -- the Amazons, Googles, Facebooks, and Twitters of the world. We go a long way back in telecom expertise, but we combine this with forward-thinking customers and our internal visionaries in HP Labs and across our business units.

Gardner: Just to be clear for our audience, HAVEn translates to Hadoop, Autonomy, Vertica, and Enterprise Security, along with a whole suite of horizontally and vertically integrated applications that are specific to vertical industries. Is that right?

It’s coming from the business people that understand that they need to do something with the data and monetize it.

Ringer: Exactly.

Gardner: Tell me what you do in terms of how you reach out to communications organizations. Is there something about meeting them at the hardware level and then alerting them to what these other Big Data capabilities are? Is this a cross-discipline type of approach? How do you actually integrate HP services and then take that and engage with these CSPs?

Ringer: Those things exist, like engaging at a hardware level, but those are the less common go-to-market motions that we see. The more popular ones are top-down, in the sense that we meet with business stakeholders who want to know how to leverage Big Data and analytics to improve their business.

They don't care about the data other than how it's going to result in actionable intelligence. So, at the CSP level, it can be with marketing officers within the CSP who are looking to create more personalized or stickier services to increase the attention of their subscribers. They're looking to analytics for that.

It can be with business-development managers within the CSP organization who are looking to create models of collaboration with the Yahoos and Facebooks of the world, with retailers, or with other participants in their ecosystem. To those partners they can bring the ability to provide the pipe, back-end hosting of services, and intelligence about how the pipe is delivering the services and what the sentiment of the customers on the other end of the pipe is.

They want to share information of value with their customers, making those customers dependent on them in new ways that aren't just about the pipe, and thereby gaining new revenue streams. That's the kind of motivation they have. It can be with IT folks as well, but at the end of the day the discussion about CSP Big Data isn't coming from the technology. It's coming from the business people that understand that they need to do something with the data and monetize it.

Then, of course, it quickly becomes a technical discussion, but the motion is business to technology, rather than infrastructure to technology.

Support practice

We also developed a support practice within our organization that does exactly that: business advisory workshops. They help stakeholders in different roles realize what the priorities are in using Big Data and what roadmap they want to implement.

The purpose of this exercise is to quickly bring everybody into the same room, sit together for a day or two, and come out with an agreement on how to move from conventional services to more personalized services and diversify the business channels by using their information and data.

For several years now, one large customer, Telefónica, a large conglomerate with operations across Latin America, has been working with us on analytics projects to improve the quality of experience for their subscribers.

In Latin America, most people are interested in football, and many of them want to watch it on their mobile device. The challenge is that they all want to watch it during the same 90 minutes. That’s a challenge for any mobile operator, and that’s exactly where we started a critical project with Telefónica. 

We're helping them analyze the quality of experience. Measuring the quality of experience isn't a very complicated thing. There are probes in the network to do that. We can pretty accurately get the quality of experience for every single video-streaming session. It's no big deal.

Analytics kicks in when you want to correlate this aggregation of quality with who the subscriber is, how the subscriber is expected to behave, and what he's interested in. We know that the quality isn't good enough for many subscribers during the football game, but we need to differentiate and know which of them we want to offer a package upgrade. What's the right offer? When's the right time to make the offer? How many different offers do we test to zero in on the best set of offers?

We want to know which one of them we don’t want to promote anything to, but just want to make him happy. We want to give him a better quality experience for free, because he is a good customer and we don’t want to lose him. And we want to know which customer we want to come back to later, apologize, and offer him a better deal.

Real-time analytics

Based on real-time triggering of events from the network (a degradation of quality), combined with ongoing information about the subscriber -- who he is, what marketing segment he belongs to, what package he subscribes to, and so on -- we do the analytics in real time and decide what the right action and the right move are, in order to give the best experience to the individual subscriber.
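
To make that kind of segmentation logic concrete, here is a hypothetical Python sketch. The thresholds, field names, and actions are invented for illustration only; they are not Telefónica's or HP's actual rules.

```python
# Hypothetical decision rules for a quality-degradation event during a match.
# All field names, thresholds, and actions are invented for illustration.

def choose_action(subscriber: dict, degradation_severity: float) -> str:
    """Map a real-time quality event plus subscriber context to a single action."""
    if degradation_severity < 0.2:
        return "no_action"                      # barely noticeable, do nothing
    if subscriber["lifetime_value"] == "high" and subscriber["churn_risk"] == "high":
        return "boost_quality_free"             # keep a valuable, at-risk customer happy
    if subscriber["package"] == "basic" and subscriber["upsell_propensity"] > 0.6:
        return "offer_package_upgrade"          # the right moment for a timely offer
    return "apologize_and_offer_later"          # follow up after the game

# Example: a basic-package fan with a strong upsell score gets the upgrade offer.
print(choose_action(
    {"lifetime_value": "medium", "churn_risk": "low",
     "package": "basic", "upsell_propensity": 0.8},
    degradation_severity=0.5,
))
```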

It’s working very nicely for them. I like this example, first of all, because it’s real, but also because it shows the variety of processes we have here with correlation of real-time information with ongoing information for the subscribers. We have contextual action that is taken to monetize and to improve quality and to improve satisfaction. 

This example touches so many needs of an operator, and it's all done in a pretty straightforward manner. The implementation is rather simple. It's all about running the right processes and putting the right business process in place. But this isn't always straightforward for enterprise customers, particularly those in the small-to-medium enterprise segment, so imagine what CSPs could do for their customers once they've gotten a handle on this for their own businesses.

We have contextual action that is taken to monetize and to improve quality and to improve satisfaction. 

Gardner: It seems to me that that helps reduce the risk of a provider or their customers coming out with new services. If they know that they can adjust rapidly and can make good on services, perhaps this gives them more runway to take off with new services, knowing that they can adjust and be more agile. It seems like it really fundamentally changes how well they can do their business.

Ringer: Absolutely. It also reduces the investment risk quite a lot. If you launch a new service and find out that you need to beef up your entire network, that's a major hit to your investment strategy. At the same time, if you can be very granular and very selective in your investment, you can do it much more easily and justify subsequent investments more clearly.

Gardner: Are there any other examples of how this is manifesting itself in the market -- the use of Big Data in the telecommunications industry?

Ringer: Let me give another example, this one from North America: an implementation we did for a large mobile operator there, in collaboration with a chain of retail malls.

What we did there is combine the ongoing information that the mobile operator has about its subscribers -- what each subscriber is interested in, what their prior buying patterns and transactions were, and so on -- with location information about where the individual is in the mall.

The mall operator runs a private wi-fi network there, so he has his own system for tracking where an individual is within the mall. He knows within two meters where a person is, and a map overlay of the physical mall ties all product and service offerings to the same grid.

When we know a person is in the mall, we can correlate it with what the CSP already knows about this person. He knows that the specific person has a high probability of looking for a specific running shoe. The mobile operator knows it because he tracks the web behavior and the profile of the specific individual, and he can tell with pretty good accuracy that this person, given the right offer, will say yes to running shoes.

Targeted and timely

So combining these two things -- the ongoing analytics of preferences together with real-time location information -- gives us the ability to push out targeted and timely promotions and coupons.

Imagine that you go to the mall and you pass the shoe store. Your device pops up a message that says Nike shoes are 50 percent off for the next 15 minutes. You know that you're looking for Nike shoes, so the chance that you'll go into the store is very good, and the results are very good because you create a "buy now or you'll miss out" feeling in the prospect. Many subscribers take the coupons that are pushed to them in this way.

Of course, it's all based on opt-in, and of course, it's very granular, in the sense that the analytics we do on subscriber information are limited to what the subscriber has opted in to let us look at. For instance, a specific person may allow us to look at his behavior on retail sites, but not on financial sites.
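
A rough Python sketch of that location-triggered flow, with the granular opt-in check applied first, is shown below. Every name, category, and score here is invented for illustration and does not describe the operator's or HP's actual system.

```python
# Hypothetical sketch of a location-triggered promotion, with the opt-in check first.
# All names, categories, and scores are invented for illustration.

from datetime import datetime, timedelta
from typing import Optional

def maybe_push_coupon(subscriber: dict, location: dict) -> Optional[dict]:
    """Combine consent, inferred interest, and in-mall position into one timely offer."""
    if "retail" not in subscriber["opted_in_categories"]:
        return None                                   # respect granular consent
    interest = subscriber["inferred_interests"].get(location["nearest_store_category"], 0.0)
    if interest < 0.7:
        return None                                   # not a likely buyer, stay silent
    return {
        "store": location["nearest_store"],
        "message": "Running shoes 50 percent off",
        "expires": datetime.now() + timedelta(minutes=15),  # time-boxed to create urgency
    }

# Example: an opted-in subscriber with a strong interest in footwear, standing near the store.
print(maybe_push_coupon(
    {"opted_in_categories": {"retail"}, "inferred_interests": {"footwear": 0.9}},
    {"nearest_store": "Shoe Store", "nearest_store_category": "footwear"},
))
```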

Gardner: Again, this shows a fundamental shift: the communications provider is not just a conduit for information, but can also offer value-added services to both the seller and the buyer -- radically changing its position in the market.

If I am an organization in the CSP industry and I listen to you and I have some interest in pursuing better Big Data analytics, how do I get started? Where can I go for more information? What is it that you’ve put together that allows me to work on this rather quickly?

Ringer: As I mentioned before, we typically recommend engaging in a two-day workshop with our business consultants. We have a large team of Big Data advisory consultants, and that’s exactly what they do. They understand the priorities and work together with the telecom organizations to come up with some kind of a roadmap -- what they want to do, what they can do, what they are going to do first, and what they are going to do later. 

They all look to become more proactive, they all realize that data is an asset and is something that you need to keep handy, keep private, and keep secured.

That’s our preferred way of approaching this discipline. Overall, there are so many kinds of use cases, and we need to decide where to start. So that’s how we start. To engage, the best place is to go to our website. We have lots of information there. The URL is hp.com/go/telcoBigData, that’s one word, and from there you just click Contact Us, and we’ll get back to you. We’ll take you from there. There are no commitments, but chances are very good.

Gardner: Before we sign off, I just wanted to look into the future. As you pointed out, more and more entertainment and media services are being delivered through communication providers. The mobile aspect of our lives continues to grow rapidly. And, of course, now that cloud computing has become more prominent, we can expect that more data will be available across cloud infrastructures, which can be daunting, but also very powerful. Where do you see the future challenges, and what are some of the opportunities?

Ringer: We can summarize four main trends that we’re seeing increasing and accelerating. One is that CSPs are becoming more active in enabling new business models with partnerships, collaborations, internet players, and so on. This is a major trend. 

The second trend that we see increasing quite intensively is operators becoming like marketing organizations, promoting services of their own or on behalf of others.

The third one is more related to the operation of the CSP itself. They need to be more aware of where they invest, what their risk is, the probability of seeing a specific ROI, and when it will occur. In short, Big Data and analytics will make them smarter and more proactive in making these investments. That's another driver that increases their interest in using the data.

Overall, they all look to become more proactive. They all realize that data is an asset and is something that you need to keep handy, keep private, and keep secured, but be able to use for a variety of use cases and processes to be ready for the next move.

Tags:  big data  BriefingsDirect  CSPs  Dana Gardner  data analysis  HP  Interarbor Solutions  Oded Ringer  service providers  structured data  unstructured data 
