Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 


Ariba’s digital handshake helps Caesars up the ante on supply chain diversity

Posted By Dana L Gardner, Tuesday, June 16, 2015

The next BriefingsDirect business trends interview focuses on Caesars Entertainment Corp. and how they're transforming supplier discovery and improving their supplier diversity through collaboration across cloud-based services and open business networks.

Learn from Caesars' best practices on how they expand diversity across their supply chain and how that has been accomplished using Ariba Discovery. We'll hear first-hand how one supplier, M&R Distribution Services, has benefited from such supplier visibility on the business network.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. 

For the inside story on improved supply chain visibility and access, please join our guests, Jessica Rosman, Director of Supplier Diversity and Sustainability at Caesars Entertainment based in Las Vegas, and Quentin McCorvey, Sr., President and COO of M&R Distribution Services, based in Cleveland. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the more difficult aspects of finding the right supplier for the right job under the right circumstances?


Rosman: Oftentimes, our portfolio managers look into their natural networks of suppliers we've already used or suppliers who have contacted us, but that can be limiting. Having a wider network or using the Discovery tool on Ariba has allowed us to open up to millions of different suppliers that we haven't met before and who might want to do business with us.

Additionally, we do numerous outreach events into the communities in which we operate, so we can find top suppliers and include them in our supply chain.

Gardner: What sort of supplier requirements are there, and has that been changing over the years? Is there a moving target for this?

Rosman: For Caesars, it really depends on the category or commodity that we're searching for. Certain commodities may require larger supply chains or more integrated processes than others. But for all of our suppliers, we're looking for quality, service, and price. That may also include requirements around insurance, delivery time, or other needs to meet those three areas.

Gardner: People are familiar with the Caesars’ name, but your organization includes a lot more. Tell us about the breadth and scope of your company.

Rosman: Caesars Entertainment is the largest globally diversified casino network. We're also the home of Horseshoe, Harrah’s, Total Rewards, Paris, Rio, and obviously, the most famous, Caesars Palace in Las Vegas.

Gardner: Quentin, tell us a little bit about M&R and why getting the visibility from folks like Caesars has been a good thing for you?

McCorvey: M&R Distribution Services, my company, was established in 2008 by my partner Joe Reccord and myself. I came out of banking, with 12 years of experience and background there. My partner has been in the distribution business for over 20 years as a market leader in a regional distribution company. We're primarily focused on distributing products such as disposable gloves. Most maintenance, repair, and operations (MRO) product lines are in our portfolio, as well as personal protective equipment and trash liners.


You asked how this, or the Caesars relationship, has been important for us. It has been very important because, as Jessica said, we found each other through some of the outreach events that they hold in the community.

It was through the National Minority Supplier Development Council; my company is a nationally certified minority business. At one of its networking and matchmaking events, I met someone on Jessica's team, Bridget Carter, and learned a little bit more about Caesars and the opportunities within Caesars. Then, through further connections, we had some opportunities that led to a strong relationship.

Gardner: What is it about making these connections between buyer and seller that’s easier today? What’s changed in the past several years?

It's about relationships

Rosman: Technology has changed, but some things haven't changed. At the end of the day, business is about relationships. To start that relationship, there are new ways that we can meet different businesses by doing outreach and having the Ariba Discovery tool, where we can team up buyers and sellers using NAICS codes, UNSPSC codes, or other types of codes. Using those, we can find those who want to sell and those who want to buy.

But part of it is the same as it has always been, which is about having that face-to-face connection, knowing that there is a potential relationship and feeling comfortable that that business will deliver on the quality, the service, and the need for the internal customer that there always has been.
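
As a rough illustration of the kind of code-based matching Rosman describes (not Ariba Discovery's actual implementation), here is a minimal Python sketch in which suppliers declare the commodity codes they serve and a buyer posting is matched on overlapping codes. The supplier names and codes are purely illustrative.

    # Hypothetical sketch of code-based buyer/seller matching; not Ariba's implementation.
    # Suppliers declare the UNSPSC-style commodity codes they serve; a buyer posting
    # lists the codes it needs, and a match is any overlap between the two sets.

    supplier_profiles = {
        "M&R Distribution Services": {"46181504", "47121701"},  # illustrative codes
        "Acme Carbonics": {"12142103"},                          # illustrative code
    }

    def match_suppliers(posting_codes):
        """Return suppliers whose declared commodity codes overlap the posting's codes."""
        return [name for name, codes in supplier_profiles.items() if codes & posting_codes]

    # A buyer posting for disposable gloves and trash liners (codes illustrative):
    print(match_suppliers({"46181504", "47121701"}))  # -> ['M&R Distribution Services']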

Gardner: As to your title, Supplier Diversity and Sustainability, how important is that? How did that come about and what are your goals?

Rosman: Caesars Entertainment has a code of commitment. Our code of commitment says that we have a responsibility to the community, to the environment, to our customers, and to our employees to be the best that we can be. Under that code of commitment, and in line with it, is our Supplier Diversity Program. Our Supplier Diversity Program sits within our sourcing office, but it also has a dotted line into the Diversity Department overall.

We are in unique areas across the country. When we do outreach within the community, it's in part because, for our businesses to grow, it's important that we find community and local business partners that can meet the 24-hour, seven-day-a-week business that we have.

It’s different than other business types that have a delivery on Monday and don’t need it again until next week. That outreach has allowed us to find small, medium-size, and large businesses that are minority-owned, women-owned, veteran-owned, and other diverse businesses that can meet those needs.

Gardner: Quentin, tell me a bit about how long you've been working with Caesars. Is this strictly in Ohio with some of their properties there? Has it expanded across the company? Have you got a beachhead that's then expanded? What's the nature of the business you have?

While I didn’t win that opportunity, what I did win was the entrée into a relationship with Caesars.

McCorvey: We initially got engaged with Caesars, as I mentioned, through an outreach program, and through that, an opportunity came up for me to bid on a project with Caesars. Because I had bid on that project, I had to get connected to the Ariba Network. While I didn’t win that opportunity, what I did win was the entrée into a relationship with Caesars. Jessica talked about how a relationship is important, and for minority business, clearly, it’s really about relationship development.

As a minority-owned company, I'm not looking for a handout. I'm looking for a handshake, an opportunity to earn the business of a customer. I have to prove myself by being able to produce tier-1 pricing and capacity and by helping solve pain points within the supply chain network. Even without winning that opportunity, I continued having conversations with Caesars and continued to develop the relationship.

Caesars has a mentoring program, which I was involved in and had the pleasure to become a part of. Through that mentoring program, I was able to sit down with Caesars and discuss certain goals that I wanted to accomplish, not only with my business personally, but also with the business opportunity with Caesars.

Some of those things included meeting the category managers in the categories where I was supplying into the organization and really understanding how to grow my key performance indicators (KPIs), not only directly, but also with Caesars and some of the other opportunities that are there.

Mentoring program

Through this mentoring program, we began to work on the relationship. I began to meet other people within the supply chain regionally, as well as the national folks -- from Jessica and her team, up and down and across the Caesars organization. That's been a very important process for me.

That's how we started out. I've gotten, and I'm going to get, opportunities through the mentoring program to start serving the company regionally. There are casinos in Ohio. My primary markets are servicing the Ohio casinos. Then, moving out of the region is a goal, ultimately growing into being a national supplier with all 52 properties within Caesars casinos.

Gardner: How important have Ariba Discovery and the Ariba Network been for you? How did you get on it? Was it easy? And where else have you been able to extend this visibility?

I actually won a couple of opportunities through the system and through the Ariba Network.

McCorvey: I got into the Ariba Network accidentally on purpose. On purpose because I had an opportunity to bid on a national contract with Caesars. When I had that opportunity, I got an invite from the buyer to sign up into Ariba. So I had to put my profile in there in order to bid on the opportunity that was available to me.

I did that, and it was a quick turnaround on the bid. I spent all of my time trying to figure out how to get through this, how to get my profile updated, and how to get the bid engaged.

I didn't really know that much about the network and how connected the matches were to opportunities. I started seeing alerts and direct opportunities that really connected with my business. Through that, I said, let me investigate a little bit further. And when I did, I began to look at some other opportunities. I actually won a couple of opportunities through the system and through the Ariba Network.

When I say "accidentally and on purpose," I guess it was fortuitous that we had this opportunity to bid. Even though it wasn't a win directly with that opportunity, it was a win for me and my company.

Gardner: Jessica, how about from the buyer side at Caesars: using the network, having the data, the insights, and the visibility, has that added more value to your process? Obviously, you've got a certain specialization, but is there a more general value that you're seeing over time?

Rosman: We've used the Ariba Network for quite a while now. We started off with the request for proposal (RFP), or sourcing, phase or module. We extended to the contracting phase or module, and then we eventually went to procure-to-pay.

We've seen a plethora of Ariba services, each one adding and building upon prior Ariba services that we had used. In all of those areas, it’s beneficial, because the lessons learned from a past RFP are archived and you can go back in and find RFPs that were used in the past.

When we're mentoring suppliers, especially within our Supplier Diversity Program, talking to minority or women suppliers, it helps us to know what some of the contract managers might be asking, or a little bit more about the categories. We don't pull the entire RFP, and we don't share all of those pieces, but we do share unique items that might be applicable to future questionnaires. That goes all the way through to procure-to-pay (P2P). It keeps everything in one place and it archives the data for us.

Real standardization

Gardner: It sounds like you have a real standardization about how you are going about these things. Is that fair to say?

Rosman: Yes, I believe it is. Our sourcing team has evolved throughout this process to a category-driven leadership approach, and Ariba has been an integral part of that.

Gardner: Any thoughts or recommendations with 20/20 hindsight now for other organizations that are looking for specific requirements in the suppliers that they're targeting?

Rosman: As we continue to grow, Ariba also continues to grow in this area of supplier diversity. Using Ariba Discovery has also helped us when we're trying to find minority- or women-owned vendors in unique industries.

An example of that is also in Ohio. We were looking to find a women-owned or minority-owned company in that region that sells carbon dioxide. We put it into Ariba Discovery assuming that we wouldn’t find anybody that we hadn’t already met through our outreach events.

When you're looking for hard-to-reach vendors and looking for that opportunity and connection, it just takes it one step further.

We had done very extensive outreach events in the community and talked to more than 300 local vendors, and yet we still were able to find some. When you're looking for hard-to-reach vendors and looking for that opportunity and connection, it just takes it one step further.

Gardner: Quentin, I imagine that, as a business owner, you're curious about what new business opportunities are available. Has the visibility within the Ariba environment, seeing what alerts come across, seeing what the bids are about, led you to pursue other business opportunities and lines of business within your company? Has it helped you grow?

McCorvey: It has definitely helped us to grow. When I initially looked at the Ariba Network, I saw it as a procurement platform. But for me it's actually more of a supply chain accelerator, and I say that because, as with any good business, what's important is deal flow: how you get projects and opportunities into the pipeline.

Ariba has delivered a maximum level of output for a minimal level of input. As a smaller, growing company, you're constantly looking at ways to grow opportunities and to grow market share. Do you invest $20,000, $30,000, $40,000 in a B2B website? Do you engage in Google Analytics? Do you put sales executives in other parts of the country to begin to grow?

Those are all the decisions you have to make every day with a limited amount of resources, because you really want to put those resources into growing your company. Ariba has been able to do that. I don't necessarily have to have a larger sales team or some of the other things out there. I can begin to look at opportunities where I can grow my company in other markets. I can service those markets. It also gives me access to other Fortune 100 and 200 companies that I wouldn't otherwise have access to.

A lot of ideas

What's important for me is to get a lot of ideas. Jessica talked a little bit about the archived RFPs. By really mining through those archived RFPs, I can see what companies are looking for, what their RFPs have been about, when their sales cycles are coming up again, when I can begin to look at and target those opportunities, and who the purchasing and procurement managers are that manage those lines.

That's tough data to find. It's tough to find out who, for example, is procuring resins for a company. You can Google it and search their website and still never find that opportunity. This really closes down the sales-cycle loop for me and gives me maximum value.

Gardner: Well, we're here at Ariba LIVE, and there's lots of news being made. We're hearing about integrated services for travel and expenses. We’re seeing more emphasis on the user experience, end-to-end processes that would end up in a mobile environment or any number of environments.

What's of interest to you? Where do you see yourselves taking advantage of some of these new technological and process innovations?

Rosman: One of the areas that's most interesting is learning about how to implement Ariba within your internal team and externally. We've done a great job of it within the Supplier Diversity Program, but how do we roll that out further across our entire supply chain? The takeaway is: how can we train internally and externally to find results using Ariba?

It's worth spending some time really understanding how Ariba works and what are the components there within the system.

Gardner: Quentin, any thoughts about what’s of interest to you and then perhaps words of advice you could give other companies that are trying to improve their business using a business network?

McCorvey: Jessica hit on it again. Technology is really driving the market. My partner, who has been in the business for 25 years, often tells a story about when he first started out: he left home every day with a pocket full of quarters and a pager. That day is gone. This is not your father's Oldsmobile. We really had to begin to leverage technology in a different way.

As a distributor, I'm looking at, and have typically been looking at, the sales side: how can I find opportunities here? But what's also been important for me to see and really learn is that I can look at it on the buy side. How can I find other manufacturing partners to drive more cost out of my supply chain and be even more competitive in my business and my business environment?

Relative to advice for other customers, other people, or other suppliers who are using the network: it's worth spending some time really understanding how Ariba works and what the components within the system are. Ariba has some very knowledgeable account executives who work directly with you. You need to spend some time with your account executive to make sure that you update your profile to the point where you get the maximum amount of exposure and the maximum number of hits.

To reiterate what I said before, it's important to look not only at the opportunities that are available to you, but also at closed opportunities, and to see whether there are other business ideas or business partnerships that you can develop through the Ariba Network.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.

You may also be interested in:

Tags:  Ariba  Ariba Discovery  Ariba LIVE  Ariba Network  BriefingsDirect  Dana Gardner  Interarbor Solutions  Jessica Rosman  networked economy  Quentin McCorvey 

 

Redcentric orchestrates networks-intensive merger using advanced configuration management database

Posted By Dana L Gardner, Monday, June 15, 2015

The next BriefingsDirect performance management discussion uncovers how Redcentric PLC in the UK tackled a major network management project due to a business merger. We'll hear how Redcentric used an advanced configuration database approach to scale management of some 10,000 devices across two disparate companies and made them accessible as a single system.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about how two major networks became merged successfully using automation based on systems data, we're joined by Edward Jackson, Operational System Support Manager at Redcentric in Harrogate, UK. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us a little bit about your company and this merger. What two companies came together, and how did that prove to be a complicated matter when it comes to network management?


Jackson: The two companies coming together were InTechnology and Redcentric. Redcentric bought InTechnology in 2013. Effectively, they were reasonably separate in terms of their setup. Redcentric comprised three separate organizations; it had already acquired Maxima and Hot Chilli. The requirement was to move their network devices and ITSM platform base onto the HP monitoring and ITSM platforms at InTechnology.

It’s an ongoing process, but it’s well on the way and we've been pretty successful so far in doing that.

Gardner: And what kind of companies are these? Tell us about your organization, the business, rather than just the IT?


Jackson: We're a managed service provider (MSP): voice, data, storage, networks, and cloud. You name it, and we pretty much deliver it and sell it as part of our managed portfolio.

Gardner: So being good at IT is not just good for you internally; it's really part and parcel of your business.

Jackson: It's critical. We have to deliver it and we have to manage it as well. So it's 100 percent critical to the business.

Gardner: Tell us how you go about something like this, Edward, when you have a big merger, when you have all these different, disparate devices that support networks. How do you tackle that? How do you start the process?

Data cleansing

Jackson: The first phase is to look at the data, see what we've got, and then start to do some data cleansing. We had to migrate data from three service desks to the InTechnology network and to the InTechnology ITSM system. You need to look at all the service contracts. You also need to look at all the individual components that make up those contracts, effectively all the configuration items (CIs), and then you're looking at a rather large migration project.

Initially, we started to migrate the customer and the contact information. Then, slowly, we started to re-provision devices from the Redcentric side to the InTechnology Managed Services (IMS) network and load it into our HP management platforms.

We currently manage over 11,000 devices. They are from multiple types of vendors and technologies. InTechnology was pretty much a Cisco shop, whereas at Redcentric, we're looking at things like Palo Alto, Brocade, Citrix load balancers and other different types of solutions. So it's everything from session border controllers down to access points.

It was a relatively challenging time in terms of being able to look at the different types of technology and then be able to manage those. Also, we've automated incidents from Operations Manager to Service Manager, and we then notify customers directly that there is a potential issue on their service. So it's been a rather large piece of work.

Gardner: Was there anything in hindsight that you did at InTechnology vis-à-vis the data about your network and devices that made this easier? Did Redcentric have that same benefit of that solid database, the configuration information? In doing this, what did you wish you had done, or someone else had done, better before that would have made it easier to accomplish?

It was a relatively challenging time in terms of being able to look at the different types of technology and then be able to manage those.

Jackson: Unfortunately, the data on the Redcentric side of the business wasn't quite as clean as it was on the InTechnology side. It was held in lots of different sources, from network shared drives to wiki pages. It all had to be collated. Redcentric had another three service desks, and we had to extract all the data out of them as well. The service desks didn't really contain any CI information either, so we had to collate the CI information along with the contacts and customers.

It was a rather mammoth task. Then, we had to load it into our CRM tool, which then has a direct connection automatically using Web Services and into Service Manager. So it initially creates organizations and contacts.

We had a template for our CIs. If they were server CIs or network CIs, they would be added to a spreadsheet, and we would use HP Connect-IT to load them into Service Manager. That automatically created CIs against the customers and contacts that were already loaded by our CRM tool.
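
As a rough sketch of that spreadsheet-to-CMDB load step (the column names and the placeholder load call are assumptions, not HP Connect-IT's or Service Manager's actual interface), the flow might look something like this in Python:

    # Hypothetical sketch of a spreadsheet-to-CMDB load; field names and the final
    # load call are assumptions, not HP Connect-IT's or Service Manager's real interface.
    import csv
    from collections import defaultdict

    REQUIRED = ("ci_name", "ci_type", "customer")

    def group_cis_by_customer(path):
        """Read CI rows from the migration spreadsheet (exported as CSV), skip incomplete
        rows, and group the rest by customer so they can be loaded against existing contacts."""
        by_customer = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if any(not row.get(field, "").strip() for field in REQUIRED):
                    continue  # incomplete rows go back for further data cleansing
                by_customer[row["customer"].strip()].append(
                    {"name": row["ci_name"].strip(), "type": row["ci_type"].strip()}
                )
        return by_customer

    # Each customer's CIs would then be pushed into the ITSM tool, for example:
    # for customer, cis in group_cis_by_customer("redcentric_cis.csv").items():
    #     itsm_client.create_cis(customer, cis)  # placeholder call, not a real API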

Gardner: Is there anything now moving forward as a combined company, or in the process of becoming increasingly combined, that these due diligence efforts around network management and configuration management will allow you to do?

Perhaps you're able to drive more services into your marketplace for your customers or make modernization moves toward software-defined networking or other trends that are afoot. Now that you're into this and have done your due diligence, how does that set you up to move forward?

New opportunity

Jackson: It opens up a new sphere of opportunity. We were pretty much a Cisco shop, but now we have obviously opened up to a lot more elements and technologies that we actively manage.

We have a lot of software-based firewalls and load balancers that we didn't previously have -- session border controllers and voice products that we didn't deliver previously -- that we can deliver now because we've opened up the network to be able to monitor and manage pretty much anything.

Gardner: Any words of advice for other organizations that may have been resisting making these moves? You were forced to do it across the board by the merger. Do you have any advice that you would offer in terms of doing network management and modernization sooner rather than later, other than the fact that people might just think good enough is good enough, or if it's not broken, don't fix it?

Jackson: When you're looking at a challenge like this, you have to make sure you do your due diligence first. It’s down to planning, an "if you fail to plan, you plan to fail" kind of thing, and it’s very true.


You need to get all the information. You need to make sure that you normalize it and sanitize it before you load it. The cliché is garbage in, garbage out, so there’s no point in putting bad information into a system once again.

We have a good set of clean data now across the board. We literally have 150,000 CIs in our CMDB. So it’s not an insignificant CMDB by any stretch of the imagination. And we know that the data from the Redcentric side of the business is now clean and accurate.
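
A minimal sketch of the kind of normalization and de-duplication described above; the field names and rules are illustrative assumptions rather than Redcentric's actual cleansing logic:

    # Illustrative "normalize and sanitize before you load" step; rules are assumptions.

    def normalize_record(rec):
        """Trim whitespace and canonicalize case so the same device from two sources compares equal."""
        return {
            "hostname": rec.get("hostname", "").strip().lower(),
            "serial": rec.get("serial", "").strip().upper().replace("-", ""),
            "vendor": rec.get("vendor", "").strip().title(),
        }

    def dedupe(records):
        """Keep one record per (hostname, serial) pair so duplicates from shared drives,
        wikis, and old service desks collapse into a single CI candidate."""
        seen, clean = set(), []
        for rec in map(normalize_record, records):
            key = (rec["hostname"], rec["serial"])
            if rec["hostname"] and key not in seen:
                seen.add(key)
                clean.append(rec)
        return clean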

Gardner: How about proving this to the business? For MSPs it might not be as critical, but for other enterprises, this might be a bit more of a challenge to translate these technical benefits into financial or economic benefits to their leadership. Any thoughts about metrics of success that you've been able to define that would fit into a return on investment (ROI) or more of an economic model? How do you translate network management proficiency into dollars and cents or pounds or euros?

Jackson: It's pretty difficult to quantify in a monetary sense. Probably the best way of quantifying the success of the project has been the actual level of support that customers have been given and the level of satisfaction that the customers now have. They're very, very happy with the level of support that we're now achieving thanks to Redcentric's ITSM and business service management (BSM) systems. Going forward, I think it will only increase the level of support that we can provide our customers.

As I said, it's quite difficult to quantify in a monetary sense. However, when churn rates are as low as 4 percent, you can basically say that you're doing something good.

Fundamental to the business

In terms of things like the CIs themselves, the CI is fundamental to the business, because it describes the whole of the service, all the services that we offer our customers. If that’s not right, then the support that we give the customer can’t be right either.

You need to give the guys on support the kind of information they need to be able to support the service. Customer satisfaction is ever increasing in terms of what we are able to offer the migrated customers.

Gardner: How about feedback from your help desk, support, and remediation people? Do they find that with this data in place, cleansed, and complete, they're able to identify where problems exist better, faster, and more easily? Can they recognize whether there is a network problem or a workload support problem -- the whole help desk benefit? Anything to offer there?

The CI is fundamental to the business, because it describes the whole of the service, all the services that we offer our customers. If that’s not right, then the support that we give the customer can’t be right either.

Jackson: About 80 percent of the tickets raised in the organization are raised through our management platform, monitoring and performance capacity monitoring. We can pretty much identify within a couple of minutes where the network error is. This all translates into tickets being auto raised in our service management platform.

Additionally, within a few minutes of an outage or incident, we can have an affected-customer list prepared. We have fields defined in Service Manager CIs that give us information about which devices are affected and what they are connected to in terms of an end-to-end service.

We run a customer report against this, and it will give you a list of customers, a list of key contacts and primary contacts. You can convert this into an email. So for a network outage, within a few minutes we can email the customer, create an incident, create related interactions to that incident, and the customer is notified that there is an issue.
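
A simplified, hypothetical sketch of deriving an affected-customer list from CI relationships; the data model shown here is an assumption, not Service Manager's actual schema:

    # Hypothetical CI relationship data; not Service Manager's real schema.
    device_to_services = {
        "edge-router-01": ["CustomerA-MPLS", "CustomerB-VoIP"],
    }
    service_contacts = {
        "CustomerA-MPLS": ["noc@customera.example"],
        "CustomerB-VoIP": ["it@customerb.example"],
    }

    def affected_contacts(failed_device):
        """Walk device -> service -> contact links and return who should be notified."""
        return {
            service: service_contacts.get(service, [])
            for service in device_to_services.get(failed_device, [])
        }

    # A monitoring alert on the router yields the notification list within seconds:
    print(affected_contacts("edge-router-01"))
    # {'CustomerA-MPLS': ['noc@customera.example'], 'CustomerB-VoIP': ['it@customerb.example']}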

Gardner: That’s the sort of brand reinforcement and service quality that many organizations are seeking. So that's enviable, I'm sure.

Are there any products or updates that could make your job even easier going forward?

Jackson: We're looking at a couple of things. One of them is HP Propel, which is a piece of software that you can hook into pretty much anything you want. For example, if you have a few disparate service desks, you can put a veneer over the top. They'll all look the same to the customers. They'll have an identical GUI, but the technology behind it could be very different.


It gives you the ability then to hook into anything, such as HP Operations Orchestration, Service Manager, Knowledge Management, or even Smart Analytics, which is another area that we're quite keen on looking at. I think that's going to revolutionize the service desk. It would be very, very beneficial for Redcentric.

There are also things like data mining. This would be beneficial and would also help with the automatic creation of knowledge articles going forward and with attaching remedial actions to incidents and interactions.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  BriefingsDirect  configuration database  Dana Gardner  Edward Jackson  HP  HPDiscover  Interarbor Solutions  ITSM  Network Management  Redcentric 

 

HP at Discover delivers the industry's first open, hybrid, ecosystem-wide cloud architecture

Posted By Dana L Gardner, Tuesday, June 02, 2015

Kicking off Discover 2015, HP today made a wide range of announcements, including broadly inclusive enhancements to its heterogeneous Helion cloud portfolio, new DevOps-friendly agile test automation solutions, expanded converged infrastructure offerings with partner Arista, as well as an all-flash expansion of its 3PAR StoreServ products.

HP's open, ecosystem-wide cloud vision marks, in my opinion, the IT industry's first and most inclusive architecture that cuts across all major cloud services, "public" and "private," PaaS and IaaS. The HP approach, leveraging open source and standards, provides much more choice in how enterprises exploit cloud-centric hybrid IT -- but without running the risk of being exploited themselves.

"We're the only company that brings it all to you. ... A cloud that works with your infrastructure. ... The way that you want to transform. ... With the right financial architecture for you. ... And we don't dictate to you how to do it," said HP CEO Meg Whitman in her opening keynote address at the HP Discover conference in Las Vegas.

Specifically, HP announced updates to the HP Helion portfolio, designed to help enterprises transition to a broadly hybrid IT. HP introduced HP Helion CloudSystem 9.0, the next release of its flagship integrated enterprise cloud solution, and enhancements to HP Helion Managed Cloud Services for managing enterprise workloads in hosted cloud environments.

The expanded support for multiple hypervisors and cloud environments in HP Helion CloudSystem 9.0 gives enterprises and service providers added flexibility to gain cloud benefits for their existing and new applications.

"Enterprise customers have a range of needs in moving to the cloud. Some need to cloud-enable traditional workloads, while others seek to build next generation 'cloud native' apps using modern technologies like OpenStack, Cloud Foundry, and Docker," said Bill Hilf, senior vice president, HP Helion Product and Service Management. "The expanded support for multiple hypervisors and cloud environments in HP Helion CloudSystem 9.0 gives enterprises and service providers added flexibility to gain cloud benefits for their existing and new applications." [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP Helion CloudSystem forms a cross-cloud, private-cloud solution, designed to help enterprises and service providers attain hybrid infrastructure capabilities -- enabling automation, orchestration and control across multiple heterogeneous clouds, workloads, and technologies, says HP. HP is calling itself a transition partner, not just a vendor or cloud provider.

HP Helion CloudSystem 9.0 expands support for multiple hypervisors and multiple clouds to provide enterprises and service providers with maximum flexibility. Additionally, HP Helion CloudSystem 9.0 integrates HP Helion OpenStack and the HP Helion Development Platform to provide customers an enterprise grade open source Cloud Foundry PaaS for cloud native application development and infrastructure.

Features and benefits

HP Helion CloudSystem 9.0 features and benefits include:

  • Simultaneous support for multiple cloud environments, including Amazon Web Services (AWS), Microsoft Azure, HP Helion Public Cloud, OpenStack technology and VMware, with the ability to fully control where workloads reside.
  • The latest release of HP Helion OpenStack, exposing OpenStack software APIs to simplify and speed development and integration with other clouds and offering developer-friendly add-ons with the HP Helion Development Platform based on Cloud Foundry.
  • Support for multiple hypervisors, now including Microsoft Hyper-V, Red Hat KVM, VMware vSphere, as well as bare-metal deployments, offering customers additional choice and avoiding vendor lock-in.
  • Support for AWS-compatible private clouds through integration with HP Helion Eucalyptus, giving customers the flexibility to deploy existing AWS workloads onto clouds they control.
  • Support for unstructured data through the Swift OpenStack Object Storage project.
  • The latest version of HP Cloud Service Automation, providing the management capabilities to control hybrid cloud environments and a built-in path to support distributed compute, efficient object storage and rapid cloud native application development.
  • An intuitive setup model delivered as a virtual appliance, allowing for installation in hours.

HP Helion Managed Cloud Services provides enterprise security and high availability capabilities needed to run mission-critical business applications.

Given that enterprises spend up to 90 percent of their IT budget on maintaining existing systems, HP estimates that enterprises can reduce IT maintenance costs by approximately 40 percent by migrating existing systems to a cloud-based architecture.

HP Helion CloudSystem 9.0 is available as standalone software supporting a multiple-vendor hardware environment or as a fully-integrated blade-based or hyper-converged infrastructure with HP ConvergedSystem. Availability is planned for later this year.

HP Helion Managed Cloud Services will launch into beta later this year with HP Helion OpenStack Managed Private Cloud and HP Helion Eucalyptus Managed Private Cloud, both of which will be consumable as a service via an easy-access portal.

In addition to these new beta offerings, HP Helion Managed Cloud Services will support the development of cloud native applications within a managed cloud service via the HP Helion Development Platform and automation of select virtual private cloud services.

HP Helion Managed Cloud Services features and benefits include:

  • New automated provisioning capabilities through a self-service portal based on HP Cloud Service Automation, enabling clouds to be deployed more quickly.
  • Support for multiple platforms to enable hybrid cloud proof-of-concepts using HP Helion OpenStack and HP Helion Eucalyptus.
  • Cloud native application development capabilities through integration with the HP Helion Development Platform, allowing enterprises to rapidly develop, deploy and deliver cloud native apps.

Helping developers "shift left"

HP also announced a new functional test automation solution, HP LeanFT, which allows software developers and testers to leverage continuous testing and continuous delivery methodologies to rapidly build, test, and deliver secure, high-quality applications. In many ways, it accelerates the adoption of agile and DevOps, but in a managed way.

HP LeanFT embraces the Agile methodology "shift left" concept by leveraging the key tools of the modern Agile developer ecosystem, says HP. It's built specifically for continuous testing and continuous delivery, and fits naturally into existing ecosystems (such as Microsoft TFS, GIT, and Subversion) and frameworks that support test driven and behavior driven development. It has powerful test automation authoring with either C# or Java, and IDE integration. It forms an enabling test foundation for improved DevOps.

LeanFT will be the bridge from UFT to the future with increased focus on Agile developers, flexible licensing, better cross-browser testing, mobile testing, and IoT testing.

In the most recent Forrester Wave on Modern Application Functional Test Automation, Forrester states: "HP UFT vision will appeal to developers. HP's vision and three-year road map is anchored on LeanFT, which, if executed in a timely fashion, will appeal to testers and developers. In fact, LeanFT will be the bridge from UFT to the future with increased focus on Agile developers, flexible licensing, better cross-browser testing, mobile testing, and Internet of Things (IoT) testing as further key elements of the road map."

The new solution integrates with HP Application Lifecycle Management, Quality Center, and Mobile Center, which allows developers and testers to reduce maintenance costs, share testing resources, and deliver new mobile applications at Agile speed. HP also introduced major upgrades to its flagship HP Unified Functional Testing and HP Business Process Testing products, including support for GIT integration as a repository option and scriptless keyword-driven testing, says HP.

"HP LeanFT beautifully balances the twin imperatives of velocity and quality by allowing developers to operate in the modern Agile and DevOps ecosystem, while also leveraging our proven capabilities in application testing and application lifecycle management," says Raffi Margaliot, SVP and GM, HP Application Delivery Management.

HP LeanFT will be available in July 2015 on http://saas.hp.com. HP Unified Functional Testing 12.5 and HP Business Process Testing 12.5 will also be available in July. Customers who upgrade to HP UFT 12.5 will receive HP LeanFT free of charge.

HP Application Defender is available now for free on five pre-production application instances. For more information, please visit http://go.saas.hp.com/application-defender-trial.

In other news

In other news, HP announced that it was expanding its converged infrastructure portfolio, including enhancements to its HP OneView 2.0 management platform, a new partnership with Arista Networks, and a series of new workload optimized reference architectures. These new offerings will give customers the flexibility they need to transform to hybrid architecture and at the same time, protect their existing IT investments.

OneView unifies processes, UIs, and APIs across HP server, storage, and Virtual Connect networking devices. HP states that OneView can be configured in 96 percent less time, taking only five steps to deploy a VMware vSphere cluster, and has nine times faster error resolution.

New features include automated server change management, server profile templates (making it easier to define firmware and driver baselines as well as server, LAN, and SAN settings), and new profile mobility that enables migration and recovery of workloads across server platform types, configurations, and generations.

The partnership with Arista Networks is beneficial for customers who are looking for flexibility in infrastructure that handles performance-intensive, virtualized and highly dynamic workloads.

The partnership with Arista Networks, which delivers software-defined networking solutions for data centers, cloud computing, and HPC environments, is beneficial for customers who are looking for flexibility in infrastructure that handles performance-intensive, virtualized and highly dynamic workloads.

These new solutions are designed to support private, public and hybrid cloud applications while providing a flexible and open choice across compute and storage, including all-flash solutions with HP 3PAR StoreServ.

HP announced new 3PAR flash storage models. The HP 3PAR StoreServ 20000 enterprise family is due to ship in August and features a 20850 all-flash model and a 20800 converged flash array that supports hard-disk drives and solid state drives.

Both start at two controllers and can scale out to eight. The 20850 can hold up to 1024 solid state drives, ranging from 480 GB to a new 3.84 TB drive.

HP also took advantage of Discover to announce that DreamWorks Animation has selected HP to automate its IT infrastructure. By deploying HP Datacenter Care - Infrastructure Automation, HP is enabling DreamWorks Animation to manage its infrastructure as code for continuous delivery of applications and services.

You may also be interested in:

Tags:  big data  BriefingsDirect  cloud computing  Dana Gardner  HP  HP DISCOVER  HP Helion  Interarbor Solutions  testing 

 

How Tableau Software and big data come together: Strong visualization embedded on an agile analytics engine

Posted By Dana L Gardner, Friday, May 29, 2015

The next BriefingsDirect big data innovation discussion highlights how Tableau Software and big data analytics platforms come together to provide visualization benefits for those seeking more than just crunched numbers. They're looking for ways to improve their businesses effectively and productively, and to share the analysis quickly and broadly.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more, BriefingsDirect sat down with Paul Lilford, Global Director of Technology Partners for Tableau Software, based in Seattle, and Steve Murfitt, Director of Technical Alliances at HP Vertica. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why is the tag-team between Tableau and big data so popular? Every time I speak with someone using Vertica, they inevitably mention that they're delivering their visualizations through Tableau. This seems to be a strong match.


Lilford: We're a great match primarily because Tableau's mission is to help people see and understand data. We're made more powerful by getting to large data, and Vertica is one of the best at storing that. Their columnar format is a natural format for end users, because they don't think about writing SQL and things like that. So Tableau, as a face to Vertica, empowers business users to self-serve and deliver a depth of analytics that is unmatched in the market.


Gardner: Now, we can add visualization to a batch report just as well as to a real-time, streamed report. What is it about visualization that seems to be more popular in a higher-density data and real-time analysis environment?

Lilford: The big thing there, Dana, is that batch visualization will always be common. What's a bigger deal is data discovery, the new reality for companies. It leads to becoming data-driven in your organization and making better-informed decisions, rather than taking a packaged report and trying to make a decision that maybe tells you how bad you were in the past or how good you might think you could be in the future. Now, you can actually have a conversation with your data and cycle back and forth between insights and decisions.

The combination of our two technologies allows users to do that in a seamless drag-and-drop environment. From a technical perspective, the more data you have, the deeper you can go. We’re not limiting a user to any kind of threshold. We're not saying, this is the way I wrote the report, therefore you can go consume it.

We’re saying, "Here is a whole bunch of data that may be a subject area or grouping of subject areas, and you're the finance professional or the HR professional. Go consume it and ask the questions you need answered." You're not going to an IT professional to say, "Write me this report and come back three months from now and give it to me." You’re having that conversation in real time in person, and that interactive nature of it is really the game changer. 

Win-win situation

Gardner: And the ability for the big data analysis to be extended across as many consumer types in the organization as possible makes the underlying platform more valuable. So this, from HP's perspective, must be a win-win, Steve?

Murfitt: It definitely is a win-win. When you have a fantastic database that performs really well, it's kind of uninteresting to show people just tables and columns. If you can have a product like Tableau and you can show how people can interact with that data, deliver on the promise of the tools, and try to do discovery, then you’re going to see the value of the platform.


Gardner: Let’s look to the future. We've recently heard about some new and interesting trends for increased volume of data with the Internet of Things, mobile, apps being more iterative and smaller, therefore, more data points.

As the complexity kicks in and the scale ramps up, what do you expect, Paul, for visualization technology and the interactivity that you mentioned? What do you think we're approaching? What are some of the newer aspects of visualization that makes this powerful, even as we seek to find more complexity?

Lilford: There are a couple of things. Hadoop, if you go back a year-and-a-half or so, has been moving from a cold-storage technology to more of a discovery layer. One of the trends in visualization is predictive content becoming part of everyday life.

Tableau democratizes business intelligence (BI) for the business user. We made it an everyday thing for the business user to do that. Predictive is in a place that's similar to where BI was a couple years ago, going to the data scientist to do it. Not that the data scientist's value wasn’t there, but it was becoming a bottleneck to doing things because you have to run it through a predictive model to give it to someone. I think that's changing.

So I think that predictive element is more and more part of the continuum here. You're going to see more forward-looking, more forecast-based, more regression-based, more statistical things brought into it. We’ll continue to innovate with some new visuals, but the standard visual is unstructured data.

This is the other big key, because 80 percent of the world's data is unstructured. How do you consume that content? Do you still structure it or can you consume it where it sits, as it sits, where it came in and how it is? Are there discoverers that can go do that?

You're going to continue to see those grow. The biggest green fields in big data are predictive and unstructured. Having the right stores, like Vertica, to scale that is important, but allowing anyone to do it is the other important part, because if you give it to a few technical professionals, you really restrict your ability to make decisions quickly.

Gardner: Another interesting aspect, when I speak to companies, is the way that they're looking at their company more as an analytics and data provider, internally and externally. The United States Postal Service views itself in that fashion as an analytics entity, but it is also looking for business models: how to take data, and the analysis of data it might be privy to, and make that available as a new source of revenue.

I would think that visualization is something that you want to provide to a consumer of that data, whether they are internal or external. So we're seeing the advent of data as a business for companies that may not have even considered that, but could.

Most important asset

Lilford: From our perspective, it's a given that data is a service. Data is the most important asset that most companies have. It's where the value is. Becoming data-driven isn't just a tagline that we or other people talk about. If you want to make decisions that move your business, you end up being a data provider.

The best example I can maybe give you, Dana, is healthcare. I came from healthcare and when I started, there was a rule -- no social. You can't touch it. Now, you look at healthcare and nurses are tweeting with patients, "Don’t eat that sandwich. Don't do this."


Data has become a way to lower medical costs in healthcare, which is the biggest expense. How do you do that? They use social and digital data to do that now, whereas five, seven years ago, we couldn't do it. It was a privacy thing. Now, it's a given part of government, of healthcare, of banking, of almost every vertical. How do I take this valuable asset I’ve got and turn it into some sort of product, market, or market advantage, whatever that is?

Gardner: Steve, anything more to offer on the advent or acceleration of the data-as-a-business phenomena?

Murfitt: If you look at what companies have been doing for such a long time, they have been using the tools to look at historical data to measure how they're doing against budget. As people start to make more data available, what they really want to do is compare themselves to their peers.

As people start to make more data available, what they really want to do is compare themselves to their peers.

If you're doing well against your budget, that doesn't tell you whether you're gaining or losing market share or how well you're really doing. So as more data is shared and more data is available, being able to compare yourself to peers and to averages, and to measure yourself not only internally but externally, is going to help people make their decisions.

Gardner: Now, for those organizations out there that have been doing reports in a more traditional way, that recognize the value of their data and the subsequent analysis, but have yet to dabble deeply in visualization, what are some good rules of the road for beginning a journey toward visualization?

What might you consider in terms of how you set up your warehouse or you set up your analysis engine, and then make tools available to your constituencies? What are some good beginning concepts to consider?

Murfitt: One of the most important things is to start small, prove it, and scale it from there. The days of boiling the ocean to try to come up with analytics, only to find out it didn't work, are over.

Organizations want to prove it, and one of the cool things about doing that visually is now the person who knows the data the best can show you what they're trying to do, rather than trying to push a requirement out to someone and ask "What is it you want?" Inevitably, something’s lost in translation when that happens or the requirement changes by the time it's delivered.

Real-time conversation

You now have a real-time, interactive, iterative conversation with both the data and business users. If you’re a technical professional, you can now focus on the infrastructure that supports the user, the governance, and security around it. You're not focused on the report object anymore. And that report object is expensive.

It doesn't mean that compliance items like the financial reports go away; it means you've right-sized that work effort. Now, the people who know the data best deliver the data, and the people who support the infrastructure best support that infrastructure and that delivery.

It's a shift. Technologies today do scale: Vertica is a great scalable database, and Tableau is a great self-service tool. The combination of the two allows you to do this now. If you go back even seven years, it was a difficult thing. I built my career being a data warehouse BI guy. I was the guy writing reports and building databases for people, and it doesn't scale. At some point, you're a bottleneck for the people who need to do their job. I think that's the single biggest thing in it.

Gardner: Another big trend these days is people becoming more used to doing things from a mobile device, whether it's a "phablet," a tablet, or a smartphone. It's hard to look at more than one or two cells of a spreadsheet at a time on those things. So visualizations and exercising your analytics through a mobile tier seem to go hand in hand. What should we expect there? Isn't there a very natural affinity between mobile and analytics visualization?

Most visuals work better on a tablet. Right-sizing that for the phone is going to continue to happen.

Lilford: We have mobile apps today, but I think you're going to see a fast evolution in this. Most visuals work better on a tablet. Right-sizing that for the phone is going to continue to happen, scaling that with the right architecture behind it, because devices are limited in what they can hold themselves.

I think you'll see a portability element come to it, but at the same time, this is early days. Machines are generating data, and we're consuming it at a rate at which it's almost impossible to consume. Those devices themselves are going to be the game changer.

My kids use iPads; they know how to do it. There's a whole new workforce in the making that knows these kinds of tools. Devices are just going to get better at supporting it. We're in the very early phases of it. I think we have a strong offering today, and it's only going to get stronger in the future.

Gardner: Steve, any thoughts about the interception between Vertica, big data, and the mobile visualization aspect of that?

Murfitt: The important thing is having the platform that can provide the performance. When you're on a mobile device, you still want the instant access, and you want it to be real-time access. This is the way the market is going. If you go with the old, more traditional platforms that can’t perform when you're in the office, they're not going to perform when you are remote.

It’s really about building the infrastructure, having the right technology to be able to deliver that performance and that response and interactivity to the device wherever they are.

Working together

Gardner: Before we close, I just wanted to delve a little bit more into the details of how HP Vertica and Tableau Software work together. Is this an OEM, a partnership, co-selling, co-marketing? How do you define it for those folks out there who use one or the other, or neither of you? How should they progress to making the best of Vertica and Tableau together?

Lilford: It's a technology partnership. It's a co-selling relationship, and we do that by design. We're a best-in-breed technology; we do what we do better than anyone else. Vertica is one of the best databases, and they do what they do better than anyone else. The whole reason we partner is to solve customer issues -- the combination of the two gives customers options to solve problems.


We want to do it as best-in-breed. That's a lot of what the new stack technologies are about: it's no longer a single vendor building a huge solution stack. It's the best database, with the best Hadoop storage, with the best visualization, with the best BI tools on top of it. That's where you get a better total cost of ownership (TCO) overall, because now you're not invested in one player that has to deliver all of this. You're invested in the best of what each does, and you're delivering in real time for people.


Gardner: Last question, Steve, about the degree of integration here. Is this something that end-user organizations can do themselves? Are there professional services organizations involved? What degree of integration between Vertica and Tableau visualization is customary?

Murfitt: Tableau connects very easily to Vertica. There is a dropdown on the database connector saying, "Connect to Vertica." As long as they have the driver installed, it works. And the way the interface works, they can start querying and getting value from the data straight away.
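
For readers who want to sanity-check that connectivity outside of Tableau, a quick driver-level test confirms the cluster is reachable and that ad-hoc queries return promptly. This is a minimal sketch using the open-source vertica-python client; the host, credentials, and sample table are hypothetical placeholders, not details from the deployment discussed here.

```python
# Minimal connectivity check against a Vertica cluster, assuming the
# open-source vertica-python client is installed (pip install vertica-python).
# Host, credentials, and the sample table are hypothetical placeholders.
import vertica_python

conn_info = {
    'host': 'vertica.example.com',   # placeholder host
    'port': 5433,                    # Vertica's default client port
    'user': 'dbadmin',
    'password': 'changeme',
    'database': 'analytics',
}

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    # The same kind of ad-hoc aggregate a self-service worksheet would push down
    cur.execute("""
        SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM sales_fact
        GROUP BY region
        ORDER BY revenue DESC
        LIMIT 10
    """)
    for region, orders, revenue in cur.fetchall():
        print(region, orders, revenue)
finally:
    conn.close()
```

Once that round trip works, pointing Tableau's Vertica connector at the same host and credentials should behave the same way.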

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.


Tags:  big data  BriefingsDirect  Dana Gardner  data analytics  HP  HP Vertica  Interarbor Solutions  Paul Lilford  Steve Murfitt  Tableau 


The Open Group panel explores how standards thwart thorny global cybersecurity issues

Posted By Dana L Gardner, Wednesday, May 27, 2015

How can global enterprise cybersecurity be improved for better enterprise integrity and risk mitigation? What constitutes a good standard, or set of standards, to help? And how can organizations work to better detect misdeeds, rather than have attackers on their networks for months before being discovered?

These questions were addressed during a February panel discussion at The Open Group San Diego 2015 conference. Led by moderator Dave Lounsbury, Chief Technology Officer, The Open Group, the speakers included Edna Conway, Chief Security Officer for Global Supply Chain, Cisco; Mary Ann Mezzapelle, Americas CTO for Enterprise Security Services, HP; Jim Hietala, Vice President of Security for The Open Group; and Rance DeLong, Researcher into Security and High Assurance Systems, Santa Clara University.

Download a copy of the full transcript. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.] 

Here are some excerpts:

Dave Lounsbury: We've heard about the security, cybersecurity landscape, and, of course, everyone knows about all the many recent breaches. Obviously, the challenge is growing in cybersecurity. So, I want to start asking a few questions, directing the first one to Edna Conway.

Lounsbury

We've heard about the Verizon Data Breach Investigations Report (DBIR), which catalogs the various attacks that have been made over the past year. One of the interesting findings was that in some of these breaches, the attackers were on the networks for months before being discovered.

What do we need to start doing differently to secure our enterprises?


Edna Conway: There are a couple of things. From my perspective, continuous monitoring is absolutely essential. People don't like it because it requires rigor, consistency, and process. The real question is, what do you continuously monitor?

It's what you monitor that makes a difference. Access control and authentication should absolutely be on our radar screen, but I think the real ticket is behavior. What kind of behavior do you see authorized personnel engaging in that should send up an alert? That's a trend that we need to embrace more.

Conway

The second thing that we need to do differently is drive detection and containment. I think we try to do that, but we need to become more rigorous in it. Some of that rigor is around things like, are we actually doing advanced malware protection, rather than just detection?

What are we doing specifically around threat analytics and the feeds that come to us: how we absorb them, how we mine them, and how we consolidate them?

The third thing for me is how we get it right. I call that team the puzzle solvers. How do we get them together swiftly?

How do you put the right group of experts together when you see a behavior aberration or you get a threat feed that says that you need to address this now? When we see a threat injection, are we actually acting on the anomaly before it makes its way further along in the cycle?
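
To make the behavioral monitoring Conway describes a bit more concrete, here is a minimal sketch of a baseline-and-deviation check over per-user activity counts. It is illustrative only: the event shape, the z-score threshold, and the sample users are assumptions, not any particular vendor's schema or any one company's practice.

```python
# Illustrative behavioral baseline: flag users whose daily activity count
# deviates sharply from their own history. The event shape, users, and
# threshold are hypothetical; real systems would use far richer features.
from statistics import mean, pstdev

def build_baseline(history):
    """history: {user: [daily_event_counts, ...]} -> {user: (mean, stdev)}"""
    return {user: (mean(counts), pstdev(counts) or 1.0)
            for user, counts in history.items() if counts}

def flag_anomalies(baseline, today, z_threshold=3.0):
    alerts = []
    for user, count in today.items():
        mu, sigma = baseline.get(user, (0.0, 1.0))
        z = (count - mu) / sigma
        if z >= z_threshold:
            alerts.append((user, count, round(z, 1)))
    return alerts

history = {
    "alice": [40, 35, 42, 38, 41],   # typical daily file-access counts
    "bob":   [5, 7, 6, 4, 6],
}
today = {"alice": 44, "bob": 95}      # bob's spike should raise an alert

print(flag_anomalies(build_baseline(history), today))
```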

Executive support

Mary Ann Mezzapelle: Another thing that I'd like to add is making sure you have the executive support and processes in place. If you think about how many plans and tests organizations have gone through for business continuity and recovery, you have to think about incident response the same way. We talked earlier about how to get the C-suite involved. We need to have that executive sponsorship and understanding, and that means it's connected to all the other parts of the enterprise.

Mezzapelle

So it might be the communications, it might be legal, it might be other things, but knowing how to do that and being able to respond to it quickly is also very important.

Rance DeLong: I agree on the monitoring being very important as well as the question of what to monitor. There are advances being made through research in this area, both modeling behavior -- what are the nominal behaviors -- and how we can allow for certain variations in the behavior and still not have too many false positives or too many false negatives.

Also on a technical level, we can analyze systems for certain invariants, and these can be very subtle and complicated invariant formulas that may be pages long and hold on the system during its normal operation. A monitor can watch both for those invariants, the static things, and for changes that are supposed to occur, checking whether those changes are happening the way they're supposed to.
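
As a toy illustration of DeLong's point about watching both static invariants and expected changes, the sketch below checks two made-up properties across successive state snapshots; real invariants, as he notes, can run to pages of formulas.

```python
# Toy runtime monitor: verify a static invariant (total balance is conserved)
# and an expected change (the transaction counter only moves forward).
# The state shape and both properties are hypothetical examples.
def check_transition(prev_state, curr_state):
    violations = []

    # Static invariant: money may move between accounts, but the total is constant.
    if sum(curr_state["accounts"].values()) != sum(prev_state["accounts"].values()):
        violations.append("total balance changed")

    # Expected change: the transaction counter must be monotonically increasing.
    if curr_state["tx_counter"] < prev_state["tx_counter"]:
        violations.append("transaction counter went backwards")

    return violations

prev = {"accounts": {"a": 100, "b": 50}, "tx_counter": 17}
curr = {"accounts": {"a": 80, "b": 70}, "tx_counter": 18}
print(check_transition(prev, curr))   # [] means this transition looks healthy
```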

Jim Hietala: The only thing I would add is that I think it’s about understanding where you really have risk and being able to measure how much risk is present in your given situation.

DeLong

In the security industry, there has been a shift in mindset away from figuring that we can actually prevent every bad thing from happening, toward really understanding where people may have gotten into the system. What are the markers that something has gone awry, and how do we react to that in a more timely way -- detective controls, as opposed to purely preventative controls.

Lounsbury: We heard from Dawn Meyerriecks earlier about the convergence of virtual and physical and how that changes the risk management game. And we heard from Mary Ann Davidson about how she is definitely not going to connect her house to the Internet.

So this brings new potential risks and security management concerns. What do you see as the big Internet of Things (IoT) security concerns and how does the technology industry assess and respond to those?

Hietala: In terms of IoT, the thing that concerns me is that many of the things we've solved at some level in IT hardware, software, and systems seem to have been forgotten by many of the IoT device manufacturers.

Hietala

We have pretty well thought out processes for how we identify assets, we patch things, and we deal with security events and vulnerabilities that happen. The idea that, particularly on the consumer class of IoT type devices, we have devices out there with IP interfaces on them, and many of the manufacturers just haven’t had a thought of how they are going to patch something in the field, I think should scare us all to some degree.

Maybe it is, as Mary Ann mentioned, the idea that there are certain systemic risks that are out there that we just have to sort of nod our head and say that that’s the way it is. But certainly around really critical kinds of IoT applications, we need to take what we've learned in the last ten years and apply it to this new class of devices.

New architectural approach

DeLong: I'd like to add to that. We need a new architectural approach for IoT that will help to mitigate the systemic risks. And echoing the concerns expressed by Mary Ann a few minutes ago, in 2014 Europol, an organization that tracks criminal risks of various kinds, predicted we would see murder by Internet by the end of 2014, in the context of the Internet of Things. It didn't happen, but they predicted it, and I think it's not farfetched that we may see it over time.

Lounsbury: What do we really know actually? Edna, do you have any reaction on that one?

Conway: Murder by Internet. That's the question you gave me -- thanks. Welcome to being a former prosecutor. The answer is on their derrieres. The reality is, do we have any evidentiary basis to be able to prove that?

I think the challenge is one that's really well-taken, and one we're probably all in agreement on: the convergence of these devices. We saw the convergence of IT and OT, and we haven't fixed that yet.

We are now moving with IoT into a scale-up in the nature and volume of devices. To me, the real challenge will be to come up with new ways of deploying telemetry that allow us to see all the little crevices and corners of the Internet of Things, so that we can identify risks in the same way we have done -- not mastered 100 percent, but certainly tackled -- across computer networks, the network itself, and IT. We're just not there with IoT.

Mezzapelle: Edna, it also brings to mind another thing -- we need to take advantage of the technology itself. So as the data gets democratized, meaning it's going to be everywhere -- the velocity, volume, and so forth -- we need to make sure that those devices can maybe be self-defendable, or maybe they can join together and defend themselves against other things.


So we can't just apply the old-world thinking of being able to know everything and control everything, but to embed some of those kinds of characteristics in the systems, devices, and sensors themselves.

Lounsbury: We've heard about the need. In fact, Ron Ross mentioned the need for increased public-private cooperation to address the cybersecurity threat. Ron, I would urge you to think about including voluntary consensus standards organizations in that essential partnership you mentioned to make sure that you get that high level of engagement, but of course, this is a broad concern to everybody.

President Obama has called for legislation enabling cybersecurity information sharing, and among the points within that were shaping a cyber-savvy workforce and many other aspects of public-private information sharing.

So what more can be done to enable effective public-private cooperation on this and what steps can we, as a consensus organization, take to actually help make that happen? Mary Ann, do you want to tackle that one and see where it goes?

Collaboration is important

Mezzapelle: To your point, collaboration is important, and it's not just about the public-private partnership. It also means collaboration within an industry sector, or across your supply chain and third parties. It's not just about the technology; it's also about the processes, and being able to communicate effectively, almost at machine speed, in those areas.

So when you think about the people, the processes, and the technology, I don't think it's going to be solved by government alone. I agree with the previous speakers when they were talking about how it needs to be more hand-in-hand.

There are some ways that industry can actually lead that. We have some examples, for instance what we are doing with the Healthcare Forum and with the Mining and Minerals Forum. That might seem like a little bit, but it's that little bit that helps, that brings it together to make it easier for that connection.

It's also important to think about another measure of collaboration, especially with the class of services and products that are available as a service. Maybe you, as a security organization, determine that your capabilities can't keep up with the bad guys, because they have more money, more time, and more opportunity to take advantage, either from a financial perspective or maybe even from a competitive perspective, going after your intellectual property.


You really can't do it yourself. You need those product vendors or you might need a services vendor to really be able to fill in the gaps, so that you can have that kind of thing on demand. So I would encourage you to think about that kind of collaboration through partnerships in your whole ecosystem.

DeLong: I know that people in the commercial world don't like a lot of regulation, but I think government can provide certain minimal standards that must be met to raise the floor. Not that companies won't exceed these and use that as a competitive basis, but if a minimum is set in regulations, then this will raise the whole level of discourse.

Conway: We could probably debate over a really big bottle of wine whether it's regulation or whether it's collaboration. I agree with Mary Ann. I think we need to sit down and ask what are the biggest challenges that we have and take bold, hairy steps to pull together as an industry? And that includes government and academia as partners.

But I will give you just one example: ECIDs. They are out there and some are on semiconductor devices. There are some semiconductor companies that already use them, and there are some that don't.

A simple concept would be to make sure that those were actually published on an access-controlled basis, so that we could go and see whether the ECID was actually utilized, number one.

Speeding up standards

Lounsbury: Okay, thanks. Jim, I think this next question is about standards evolution. So we're going to send it to someone from a standards organization.

The cybersecurity threat evolves quickly, and protection mechanisms need to evolve along with it. It's the old attacker-defender arms race. Standards take time to develop, particularly if you use a consensus process. How do we change the dynamic? How do we make sure that standards keep up with the evolving threat picture? And what more can be done to speed that up and keep it fresh?

Hietala: I'll go back to a series of workshops that we did in the fall around the topic of security automation. In terms of The Open Group's perspective, standards development works best when you have a strong customer voice expressed around the pain points, requirements, and issues.

We did a series of workshops on the topic of security automation with customer organizations. We had maybe a couple of hundred inputs over the course of four workshops, three physical events, and one that we did on the web. We collected that data, and then are bringing it to the vendors and putting some context around a really critical area, which is how do you automate some of the security capabilities so that you are responding faster to attacks and threats.


Generally, with just the idea that we bring customers into the discussion early, we make sure that their issues are well-understood. That helps motivate the vendor community to get serious about doing things more quickly.

One of the things we heard pretty clearly in terms of requirements was that multi-vendor interoperability between security components is pretty critical in that world. It's a multi-vendor world that most of the customers are living with. So building interfaces that are open, where you have got interoperability between vendors, is a really key thing.
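
One concrete expression of that interoperability requirement is normalizing alerts from different vendors into a shared record before any automation acts on them. The sketch below is a generic illustration; both input formats and the common fields are invented for the example rather than taken from any real product or interchange standard.

```python
# Illustrative alert normalization: two hypothetical vendor formats are
# mapped into one common record so downstream automation stays vendor-neutral.
from datetime import datetime, timezone

def normalize_vendor_a(alert):
    return {
        "source": "vendor_a",
        "host": alert["hostname"],
        "severity": alert["sev"].lower(),                      # "HIGH" -> "high"
        "time": datetime.fromtimestamp(alert["epoch"], tz=timezone.utc),
        "summary": alert["msg"],
    }

def normalize_vendor_b(alert):
    return {
        "source": "vendor_b",
        "host": alert["asset"]["name"],
        "severity": {1: "low", 2: "medium", 3: "high"}[alert["priority"]],
        "time": datetime.fromisoformat(alert["detected_at"]),
        "summary": alert["description"],
    }

raw_a = {"hostname": "web01", "sev": "HIGH", "epoch": 1433000000,
         "msg": "Possible data exfiltration"}
raw_b = {"asset": {"name": "db02"}, "priority": 3,
         "detected_at": "2015-05-30T12:00:00+00:00",
         "description": "Anomalous admin login"}

queue = [normalize_vendor_a(raw_a), normalize_vendor_b(raw_b)]
for event in sorted(queue, key=lambda e: e["time"]):
    print(event["severity"], event["host"], "-", event["summary"])
```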

DeLong: It's a really challenging problem, because in emerging technologies, where you want to encourage and you depend upon innovation, it's hard to establish a standard. It's still emerging. You don't know what's going to be a good standard. So you hold off and you wait and then you start to get innovation, you get divergence, and then bringing it back together ultimately takes more energy.

Lounsbury: Rance, since you have the microphone, how much of the current cybersecurity situation is attributable to poor blocking and tackling in terms of the basics, like doing security architecture -- or even having a method to do security architecture -- and things like risk management, which of course Jim and the Security Forum have been looking into? And beyond that, what about translating that theory into operational practice and making sure that people are doing it on a regular basis?

DeLong: A report I read on SANS -- a US Government-issued report from January 28 of this year -- said that many, most, or all of our critical weapons systems contain flaws and vulnerabilities. One of the main conclusions was that, in many cases, it was due to not taking care of the basics -- the proper administration of systems, the proper application of repairs, patches, vulnerability fixes, and so on. So we need to be able to do that in critical systems as well as on desktops.

Open-source crisis

Mezzapelle: You might consider the open-source code crisis that happened over the past year with Heartbleed, where the benefits of having open-source code are somewhat offset by the disadvantages.

That may be one of the areas where the basics need to be looked at. It's also because those systems were created in an environment where the threats were at an entirely different level. That's a reminder that we need to look at that in our own organizations.

Another area is mobile applications, where there is such a rush to get out features and revs that security isn't entirely embedded in the systems lifecycle -- or in a new startup company. Those are some of the other areas where we find that the basics, the foundation, need to be solidified to really help enhance security.

Hietala: So in the world of security, it can be a little bit opaque, when you look at a given breach, as to what really happened, what failed, and so on. But enough information has come out about some of the breaches that you get some visibility into what went wrong.


Of the two big insider breaches -- WikiLeaks and then Snowden -- in both cases there were fairly fundamental security controls that should have been in place, or maybe were in place but were poorly performed, that contributed to them -- access control, authorization, and so on.

Even in some of the large retailer credit card breaches, you can point to the fact that they didn’t do certain things right in terms of the basic blocking and tackling.

There's a whole lot of security technology out there, a whole lot of security controls that you can look to, but implementing the right ones for your situation, given the risk that you have and then operating them effectively, is an ongoing challenge for most companies.

Mezzapelle: Can I pose a question? It's one of my premises that sometimes compliance and regulation make companies do things in the wrong areas, to the point where they have a less secure system. What do you think about that, and how does it impact the blocking and tackling?

Hietala: That has probably been true for, say, the four years preceding this, but there was a study just recently -- I couldn’t tell you who it was from -- but it basically flipped that. For the last five years or so, compliance has always been at the top of the list of drivers for information security spend in projects and so forth, but it has dropped down considerably, because of all these high profile breaches. Senior executive teams are saying, "Okay, enough. I don’t care what the compliance regulations say, we're going to do the things we need to do to secure our environment." Nobody wants to be the next Sony.

Mezzapelle: Or the Target CEO who had to step down. Even though they were compliant, they still had a breach, which, unfortunately, is probably a possibility at almost every enterprise and agency that's out there.

The right eyeballs

DeLong: And on the subject of open source, it’s frequently given as a justification or a benefit of open source that it will be more secure because there are millions of eyeballs looking at it. It's not millions of eyeballs, but the right eyeballs looking at it, the ones who can discern that there are security problems.

It's not necessarily the case that open source is going to be more secure, because it can be viewed by millions of eyeballs. You can have proprietary software that has just as much, or more, attention from the right eyeballs as open source.

Mezzapelle: There are also those million eyeballs out there trying to make money on exploiting it before it does get patched -- the new market economy.

Lounsbury: I was just going to mention that we're now seeing that some large companies are paying those millions of eyeballs to go look for vulnerabilities, strangely enough, which they always find in other people’s code, not their own.


Mezzapelle: With our Zero Day Initiative -- that was part of the business model -- we pay people to find things that we can fix in our own products first, but we also make them available to other companies and vendors so that they can fix them before they become public knowledge.

Some of the economics are changing too. They're trying to get the white hatters, so to speak, to look at other parts that are maybe more critical, like what came up with Heartbleed.

Lounsbury: On that point, and I'm going to inject a question of my own if I may, on balance, is the open sharing of information of things like vulnerability analysis helping move us forward, and can we do more of it, or do we need to channel it in other ways?

Mezzapelle: We need to do more of it. It's beneficial. We still have conclaves of secretness saying that you can give this information to this group of people, but not this group of people, and it's very hard.

In my organization, which is global, I had to look at every last little detail to say, "Can I share it with someone who is a foreigner, or someone who is in my organization, but not in my organization?" It was really hard to try to figure out how we could use that information more effectively. If we can get it more automated to where it doesn't have to be the good old network talking to someone else, or an email, or something like that, it's more beneficial.

And it's not just the vulnerabilities. It's also looking more towards threat intelligence. You see a lot of investment, if you look at the details behind some of the investments in In-Q-Tel, for instance, about looking at data in a whole different way.

So we're emphasizing data, both in analytics and in threat prediction -- being able to know when something is going to come over the hill, so you can secure your enterprise, applications, or systems more effectively against it.

Open sharing

Lounsbury: Let’s go down the row. Edna, what are your thoughts on more open sharing?

Conway: We need to do more of it, but we need to do it in a controlled environment.

We can get ahead of the curve with not just predictive analysis, but telemetry, to feed the predictive analysis, and that’s not going to happen because a government regulation mandates that we report somewhere.

So if you look at, for example, the DFARS rule that came out last year with regard to concerns about counterfeit mitigation and detection in COTS ICT, the reality is not everybody is a member of GIDEP, and many of us actually share our information faster than it gets into GIDEP, and more comprehensively.

I will go back to this: it's rigor in the industry and sharing in a controlled environment.


Lounsbury: Jim, thoughts on open sharing?

Hietala: Good idea. It gets a little murky when you're looking at zero-day vulnerabilities. There is a whole black market that has developed around those things, where nations are to some degree hoarding them, paying a lot of money to get them, to use them in cyberwar type activities.

There's a great book out now, Countdown to Zero Day, by Kim Zetter, a writer from Wired. It gets into the history of Stuxnet and how it was discovered by Symantec and another security research firm whose name I forget. There were a number of zero-day vulnerabilities there that were used in an offensive cyberwar capacity. So it's definitely a gray area at this point.

DeLong: I agree with what Edna said about the parameters of the controlled environment, the controlled way in which it's done. Without naming any names, recently there were some feathers flying over a security research organization establishing some practices concerning a 60- or 90-day timeframe, in which they would notify a vendor of vulnerabilities, giving them an opportunity to issue a patch. In one instance recently, when that time expired and they released it, the vendor was rather upset because the patch had not been issued yet. So what are reasonable parameters of this controlled environment?

Supply chains

Lounsbury: Let’s move on here. Edna, one of the great quotes that came out of the early days of OTTF was that only God creates something from nothing and everybody else is on somebody’s supply chain. I love that quote.

But given that all IT components, or all IT products, are built from hardware and software components, which are sourced globally, what do we do to mitigate the specific risks resulting from malware and counterfeit parts being inserted in the supply chain? How do you make sure that the work to do that is reflected in creating preference for vendors who put that effort into it?

Conway: It's probably three-dimensional. The first part is understanding what your problem is. If you go back to what we heard Mary Ann Davidson talk about earlier today, the reality is what is the problem you're trying to solve?

I'll just use the Trusted Technology Provider Standard as an example of that. Narrowing down what the problem is, where the problem is located, helps you, number one.


Then, you have to attack it from all dimensions. We have a tendency to think about cyber in isolation from the physical, and the physical in isolation from the cyber, and then the logical. For those of us who live in OT or supply chain, we have to have processes that drive this. If those three don't converge and map together, we'll fail, because there will be gaps, inevitable gaps.

For me, it's identifying what your true problem is and then taking a three-dimensional approach to make sure that you always have security technology, the combination of the physical security, and then the logical processes to interlock and try to drive a mitigation scheme that will never reduce you to zero, but will identify things.

Particularly think about IoT in a manufacturing environment with the right sensor at the right time and telemetry around human behavior. All of a sudden, you're going to know things before they get to a stage in that supply chain or product lifecycle where they can become devastating in their scope of problem.

DeLong: As one data point, there was a lot of concern over chips fabricated in various parts of the world being used in national security systems. And in 2008, DARPA initiated a program called TRUST, which had a very challenging objective for coming up with methods by which these chips could be validated after manufacture.

Just as one example of the outcome of that, under the IRIS program in 2010, SRI unveiled an infrared laser microscope that could examine chips at the nanometer level for construction, functionality, and their likely lifetime -- how long they would last before they failed.

Lounsbury: Jim, Mary Ann, reactions.

Finding the real problem

Mezzapelle: The only other thing I wanted to add to Edna's comment is a reiteration about the economics of it and finding where the real problem is. Especially in information technology security, we tend to get so focused on trying to make it technically pure, on eliminating 100 percent of the risk, that sometimes we forget to put our business ears on and think about what that really means for the business. Is it keeping them from innovating quickly, adapting to new markets, perhaps getting into a new global environment?

We have to look back at the business imperatives and have metrics all along the road that help ensure we're putting the investments in the right areas, because security is really a risk balance, which I know Jim has a whole lot more to talk about.

Hietala: The one thing I would add to this conversation is that we have sort of been on a journey to where doing a better job of security is a good thing. The question is when is it going to become a differentiator for your product and service in the market. For me personally, a bank that really gets online banking and security right is a differentiator to me as a consumer.


I saw a study quoted this week at the World Economic Forum that said that, by a 2:1 margin, consumers -- and they surveyed consumers in 27 countries -- think that governments and businesses are not paying enough attention to digital security.

So maybe that’s a mindset shift that’s occurring as a result of how bad cybersecurity has been. Maybe we'll get to the point soon where it can be a differentiator for companies in the business-to-business context and a business-to-consumer context and so forth. So we can hope.

Conway: Great point. And just to pivot on that and point out how important it is: what we're seeing now -- it's a trend, and there are some cutting-edge folks who have been doing it for a while -- is that most boards of directors are looking at creating a digital advisory board for their company. They're recognizing the pervasiveness of digital risk as its own risk, one that sometimes reports up to the audit committee.

I've seen at least 20 or 30 in the last three months come around, asking whether we advise board members to focus on this from multiple disciplines. If we get that right, it might allow us the opportunity to share information more broadly.

Lounsbury: That’s a really interesting point, the point about multiple disciplines. The next question is unfortunately the final question -- or fortunately, since it will get you to lunch. I am going to start off with Rance.

At some point, security vulnerabilities and other kinds of failures all flow into the big risk analysis that a digital-risk management regime would surface. One of the things going on across the Real-Time and Embedded Systems Forum is looking at how we architect systems for higher levels of assurance -- not just against security vulnerabilities, but against other kinds of failures as well.

The question I will ask here is this: if a system fails its service-level agreement (SLA) for whatever reason, whether it's security or some other kind of vulnerability, is that a result of our doing system architecture, or creating software, without provably secure or provably assured components, or of the system's inability to react to those kinds of failures? If you believe that, how do we change it? How do we accelerate the adoption of better practices in order to mitigate the whole spectrum of risk of failure of the digital enterprise?

Emphasis on protection

DeLong: Well, in high-assurance systems, we still treat detection of problems when they occur, and recovery from problems, as very important, but we put a greater emphasis on prevention, and we try to put greater effort into prevention.

You mentioned provably secure components, but provable security is only part of the picture. When you do prove, you prove a theorem, and in a reasonable system, a system of reasonable complexity, there isn’t just one theorem. There are tens, hundreds, or even thousands of theorems that are proved to establish certain properties in the system.

It has to do with proofs of the various parts, proofs of how the parts combine, what are the claims we want to make for the system, how do the proofs provide evidence that the claims are justified, and what kind of argumentation do we use based on that set of evidence.

So we're looking at not just the proofs as little gems, if you will -- a proof of a theorem, think of it as a gemstone -- but at how they are all combined into creating a system.

If a movie star walked out on the red carpet with a little burlap sack around her neck full of a handful of gemstones, we wouldn’t be as impressed as we are when we see a beautiful necklace that’s been done by a real master, who has taken tens or hundreds of stones and combined them in a very pleasing and beautiful way.

And so we have to put as much attention, not just on the individual gemstones, which admittedly are created with very pure materials and under great pressure, but also how they are combined into a work that meets the purpose.

And so we have assurance cases, we have compositional reasoning, and other things that have to come into play. It’s not just about the provable components and it’s a mistake that is sometimes made to just focus on the proof.


Remember, proof is really just a degree of demonstration, and we always want some demonstration to have confidence in the system, and proof is just an extreme degree of demonstration.

Mezzapelle: I think I would summarize it by embedding security early and often, and don’t depend on it 100 percent. That means you have to make your systems, your processes and your people resilient.

This has been a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. Download a copy of the transcript. This follows an earlier discussion on cybersecurity standards for safer supply chains, another earlier discussion from the event focused on synergies among major Enterprise Architecture frameworks, and a presentation by John Zachman, founder of the Zachman Framework.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Tags:  BriefingsDirect  cybersecurity  Dana Gardner  Dave Lounsbury  Edna Conway  Interarbor Solutions  Jim Hietala  Mary Ann Mezzapelle  Rance DeLong  The Open Group  The Open Group San Diego 2015 


Big data helps Conservation International proactively respond to species threats in tropical forests

Posted By Dana L Gardner, Tuesday, May 26, 2015

This latest BriefingsDirect big data innovation discussion examines how Conservation International (CI) in Arlington, Virginia uses new technology to pursue more data about what's going on in tropical forests and other ecosystems around the world.

As a non-profit, they have a goal of a sustainable planet, but we're going to learn how they've learned to measure what was once unmeasurable -- and then to share that data to promote change and improvement.

Listen to the podcast. Find it on iTunes. Read a full transcript. Download the transcript. Get the mobile app for iOS or Android.


To learn how big data helps manage environmental impact, BriefingsDirect sat down with Eric Fegraus, Director of Information Systems at Conservation International. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First, tell us the relationship with technology. Conservation International recently announced HP Earth Insights. What is that all about?

Fegraus: HP Earth Insights is a partnership between Conservation International and HP and it's really about using technology to accelerate the work and impact of some of the programs within Conservation International. What we've been able to do is bring the analytics and a data-driven approach to build indices of wildlife communities in tropical forests and to be able to monitor them in near-real-time.

Fegraus

Gardner: I'm intrigued by this concept of being able to measure what was once unmeasurable. What do you mean by that?

Fegraus: This is really a telling line. We really don’t know what’s happening in tropical forests. We know some general things. We can use satellite imagery and see how forests are increasing or decreasing from year to year and from time period to time period. But we really don't know the finer scale measurements. We don't know what's happening within the forest or what animal species are increasing or are decreasing.

There's some technology that we have out in the field that we call camera traps, which take images or photos of the animals as they pass by. There are also some temperature sensors in them. Through that technology and some of the data analytics, we're able to actually evaluate and monitor those species over time.

Inference points

Gardner: One of the interesting concepts that we've seen is that for a certain quantity of data, let's say 10,000 data points, you can get an order of magnitude more inference points. How does that work for you, Eric? Even though you're getting a lot of data, how does that translate into even larger insights?

Fegraus: We have some of the largest datasets in our field in terms of camera trapping data and wildlife communities. But within that, you also have to have a modeling approach to be able to utilize that data, use some of the best statistics, transform that into meaningful data products, and then have the IT infrastructure to be able to handle it and store it. Then, you need the data visualization tools to have those insights pop out at you.
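
As a rough illustration of that pipeline -- turning raw camera-trap detections into a trend indicator per species -- the sketch below computes a simple year-over-year detection rate. The sites, species, and naive rate measure are invented for the example; TEAM's actual indices rest on far more rigorous occupancy statistics.

```python
# Naive wildlife trend indicator from camera-trap detections. Sites, species,
# and counts are fabricated; production indices use formal occupancy models,
# not raw detection ratios.
from collections import defaultdict

detections = [
    # (year, site, species, detections, camera_trap_days)
    (2013, "site_01", "ocelot", 12, 900),
    (2013, "site_02", "ocelot",  8, 850),
    (2014, "site_01", "ocelot",  7, 910),
    (2014, "site_02", "ocelot",  5, 870),
]

# Aggregate detections per 100 trap-days, by (species, year).
totals = defaultdict(lambda: [0, 0])            # (species, year) -> [hits, effort]
for year, _site, species, hits, effort in detections:
    totals[(species, year)][0] += hits
    totals[(species, year)][1] += effort

rates = {key: 100.0 * hits / effort for key, (hits, effort) in totals.items()}
for species in {s for s, _ in rates}:
    r_prev, r_curr = rates[(species, 2013)], rates[(species, 2014)]
    change = 100.0 * (r_curr - r_prev) / r_prev
    print(f"{species}: {r_prev:.2f} -> {r_curr:.2f} detections per 100 trap-days "
          f"({change:+.0f}%)")
```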


Gardner: So, not only are you involved with HP in terms of the Earth Insights Project, but you're a consumer of HP technology. Tell us a little bit about Vertica and HP Haven, if that also is something you are involved with?

Fegraus: Yes. All of our servers are HP ProLiant servers. We've created an analytical space within our environment using the HP ProLiant servers, as well as HP Vertica. That's really the backbone of our analytical environment. We're also using R and we're now exploring with Distributed R within the Vertica context.

We’re using the HP Cloud for data storage and back up and we’re working on making the cloud a centerpiece for data exchange and analysis for wildlife monitoring. In terms of Haven, we're exploring other parts of Haven, in particular HP Autonomy, and a few other concepts, to help with unstructured data types.


Gardner: Eric, let’s talk a little bit about what you get when you do good data analytics and how it changes the game in a lot of industries, not just conservation. I'm thinking about being able to project into people’s understanding of change.

So for someone to absorb an understanding that things need to happen in order for conditions to improve, there's a degree of convincing involved. What is big data bringing to the table for you when you go to governments or companies and try to promulgate change in these environments?

Fegraus: From our perspective, what we want to do is get the best available data at the right spatial and temporal scales, the best science, and the right technology. Then, when we package all this together, we can present unbiased information to decision makers, which can lead to hopefully good sustainable development and conservation decisions.

These decision makers can be public officials setting conservation policies or making land use decisions. They can be private companies seeking to value natural capital or assess the impacts of sourcing operations in sensitive ecosystems.

Of course, you never have control over which way legislation and regulations can go, but our goal is to bring that kind of factual information to the people that need it.

Astounding results

Gardner: And one of the interesting things for me is how people are using different data sets from areas that you wouldn't think would have any relationship to one another, but then when you join and analyze those datasets, you can come up with astounding results. Is this the case with you? Are you not only gathering your own datasets but finding the means to jibe that with other data and therefore come up with other levels of empirical analysis?

Fegraus: We are. A lot of the analysis today has been focused on the data that we've collected within our network. Obviously, there are a lot of other kinds of big data sets out there, for example, provided by governments and weather services, that are very relevant to what we're doing. We're looking at trying to utilize those data sets as best we can.


Of course, you also have to be careful. One of the key things we want to do is look for patterns, but we want to make sure that the patterns we're seeing, and the correlations we detect, all make sense within our scientific domain. You don’t want to create false correlations and improbable correlations.
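
A lightweight guard against the spurious correlations Fegraus warns about is to pair every correlation estimate with a significance check before anyone acts on it. The sketch below runs a simple permutation test on fabricated series; it is a sanity filter under those assumptions, not a substitute for domain review.

```python
# Pearson correlation plus a permutation test, as a first filter against
# spurious relationships. Both series are fabricated for illustration.
import random
from statistics import mean, pstdev

def pearson(x, y):
    mx, my, sx, sy = mean(x), mean(y), pstdev(x), pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

def permutation_test(x, y, trials=10_000, seed=42):
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    shuffled = list(y)
    hits = 0
    for _ in range(trials):
        rng.shuffle(shuffled)                    # break any real association
        if abs(pearson(x, shuffled)) >= observed:
            hits += 1
    return observed, hits / trials

rainfall   = [210, 180, 250, 300, 190, 220, 270, 240]   # hypothetical mm per month
detections = [14, 11, 17, 21, 12, 15, 18, 16]           # hypothetical species counts

r, p = permutation_test(rainfall, detections)
print(f"|r| = {r:.2f}, permutation p = {p:.4f}")
print("worth a closer look" if p < 0.05 else "likely noise -- do not over-interpret")
```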

Gardner: And among those correlations that you have been able to determine so far, about 12 percent of species are declining in the tropical forests you monitor, thanks to your Tropical Ecology Assessment and Monitoring (TEAM) Network and HP Earth Insights -- and in many cases these are species not yet perceived as being endangered. So maybe you could share some of the findings, some of the outcomes, from all this activity.

Fegraus: We've actually worked up a paper, and that's one of the insights. It's telling, because species are ranked by whether or not they are considered endangered. So for species listed as "least concern" by the International Union for Conservation of Nature (IUCN), we assume that they are doing okay.

So you wouldn’t expect to find that those species are actually declining. That can really serve as an early warning, a wake-up call, to protected-area managers and government officials in charge of those areas. There are actually some unexpected things happening here. The things that we thought were safe are not that safe.


Gardner: And, for me, another telling indicator was that on an aggregate basis, some species being measured show no sign of danger or problem, but when you go localized, when you look at specific regions and ecosystems, you develop a different story. Does your data gathering give you tactical insights that are that specific?

Fegraus: That’s one of the really nice things about the TEAM Network, a partnership between Conservation International, the Wildlife Conservation Society and the Smithsonian Institution. In a lot of the work that TEAM does, we really work across the globe. Even though we're using the same methodologies, the same standards, whether we are in the Amazon or whether we're in a forest in Asia or Indonesia, we can have results that are important locally.

Then, as you aggregate them through sub-national level efforts, national-levels, or even continental levels, that's where we're trying to have the data flow up and down those spatial scales as needed.


For example, even though a particular species may be endangered worldwide we may find that locally, in a particular protected area, that species is stable. This provides important information to the protected area manager that the measures that are in place seem to be working for that species. It can really help in evaluating practices, measuring conservation goals and establishing smart policy.

Sense of confidence

Gardner: I've also spoken to some folks who express a sense of relief that they can go at whatever data they want and have a sense of confidence that they have systems and platforms that can handle the scale and the velocity of that data. It is sort of a freeing attitude that they don’t have to be concerned at the data level. They can go after the results and then determine the means to get the analysis that they need.

Is that something that you also share, that with your partnership with HP and with others, that this is about the determination of the analysis and the science, and you're not limited by some sort of speeds-and-feeds barrier?


Fegraus: This gets to a larger issue within the conservation community, the non-profits, and the environmental consulting firms. Traditionally, IT and technology has been all about keeping the lights on and making sure everyone has a laptop. There's a saying that people can share data, but the problem has really been bringing the technology, analytics, and tools to the programs that are mission critical, bringing all of this to business driven programs that are really doing the work.

One of the great outcomes of this is that we've pushed that technology to a program like TEAM and we're getting the cutting-edge technology that a program like TEAM needs into their hands, which has really changed the dynamic, compared to the status quo.

Gardner: So scale really isn't the issue any longer. It's now about your priorities and your requirements for the scientific activity?

Fegraus: Yes. It's making sure that technology meets the requirements in scientific and program objectives. And that's going to vary quite a bit depending on the program and the group that we were talking about, but ultimately it’s about enabling and accelerating the mission critical work of organizations like Conservation International.

Listen to the podcast. Find it on iTunes. Read a full transcript. Download the transcript. Get the mobile app for iOS or Android. Sponsor: HP.


Tags:  big data  BriefingsDirect  Conservation International  Dana Gardner  data analytics  Eric Fegraus  HP  HP Vertica  Interarbor Solutions 


Enterprises opting for converged infrastructure as stepping stone to hybrid cloud

Posted By Dana L Gardner, Thursday, May 21, 2015

In speaking with a lot of IT users, it has become clear to me that a large swath of the enterprise IT market – particularly the mid-market – falls in between two major technology trends.

The trends are server virtualization and hybrid cloud. IT buyers are in between – with one foot firmly into virtualization – but not yet willing to put the other foot down and commit to full cloud adoption.

IT organizations are well enamored of virtualization. They are so into the trend that many have more than 80 percent of their server workloads virtualized. They like hybrid cloud conceptually, but are by no means adopting it enterprise-wide. We’re talking less than 30 percent of all workloads for typical companies, and a lot of that is via shadow IT and software as a service (SaaS).

In effect, virtualization has spoiled IT. They have grown accustomed to what server virtualization can do for them – including reducing IT total costs – and they want more. But they do not necessarily want to wait for the payoffs by having to implement a lengthy and mysterious company-wide cloud strategy.


They want to modernize and simplify how they support existing applications. They want those virtualization benefits to extend to storage, backup and recovery, and be ready to implement and consume some cloud services. They want the benefits of software-defined data centers (SDDC), but they don’t want to invest huge amounts of time, money, and risk in a horizontal, pan-IT modernization approach. And they're not sure how they'll support their new, generation 3 apps. At least not yet.

So while IT and business leaders both like the vision and logic of hybrid cloud, they have a hard time convincing all IT consumers across their enterprise to standardize deployment of existing generation 2 workloads that span private and public cloud offerings.

But they're not sitting on their hands, waiting for an all-encompassing cloud solution miracle covered in pixie dust, being towed into town by a unicorn, either.

Benefits first, strategy second

I've long been an advocate of cloud models, and I fully expect hybrid cloud architectures to become dominant. Practically, however, IT leaders are right now less inclined to wait for the promised benefits of hybrid cloud. They want many of the major attributes of what the cloud models offer – common management, fewer entities to procure IT from, simplicity and speed of deployment, flexibility, automation and increased integration across apps, storage, and networking. They want those, but they're not willing to wait for a pan-enterprise hybrid cloud solution that would involve a commitment to a top-down cloud dictate.

Instead, we’re seeing an organic, bottom-up adoption of modern IT infrastructure in the form of islands of hyper-converged infrastructure appliances (HCIA). By making what amounts to mini-clouds based on the workloads and use cases, IT can quickly deliver the benefits of modern IT architectures without biting off the whole cloud model.

If the hyper-scale data centers that power the likes of Google, Amazon, Facebook, and Microsoft are the generation 3 apps architectures of the future, the path those organizations took is not the path an enterprise can – or should – take.

Your typical Fortune 2000 enterprise is not going to build a $3 billion state-of-the-art data center, designed from soup to nuts to support their specific existing apps, and then place all their IT eggs into that one data center basket. It just doesn’t work that way.


There are remote offices with unique requirements to support, users that form power blocks around certain applications, bean counters that won’t commit big dollars. In a word, there are “political” issues that favor a stepping-stone approach to IT infrastructure modernization. Few IT organizations can just tell everyone else how they will do IT.

The constraints of such IT buyers must be considered as we try to predict cloud adoption patterns over the next few years. For example, I recently chatted with IT leaders in the public sector, at the California Department of Water Resources. They show that what drives their buying is as much about what they don’t have as what they do.

"Our procurement is much harder. Getting people to hire is much harder. We live within a lot of constraints that the private sector doesn’t realize. We have a hard time adjusting our work levels. Can we get more people now? No. It takes forever to get more people, if you can ever get them,” said Tony Morshed, Chief Technology Officer for the California Resources Data Center.

“We’re constantly doing more with less. Part of this virtualization is survivability. We would never be able to survive or give our business the tools they need to do their business without it. We would just be a sinking ship,” he said. “[Converged infrastructure like VMware’s] EVO:RAIL looks pretty nice. I see it as something that we might be able to use for some of our outlying offices, where we have around 100 to 150 people.

"We can drop something like that in, put virtual desktop infrastructure (VDI) on it, and deliver VDI services to them locally, so they don't have to worry about that traffic going over the wide area network (WAN).” [Disclosure: VMware is a sponsor of my BriefingsDirect podcasts].

The California Department of Water Resources has deployed VDI for 800 desktops. Not only is it helping them save money, it’s also used as a strategy for a remote access. They're in between virtualization and cloud, but they're heralding the less-noticed trend of tactical modernization through hyper-converged infrastructure appliances.

Indeed, VDI deployments that support as many as 250 desktops on a single VSPEX BLUE appliance at a remote office or agency, for example, allow for ease in administration and deployment on a small footprint while keeping costs clear and predictable. And, if the enterprise wants to scale up and out to hybrid cloud, they can do so with ease and low risk.

Stepping stone to cloud

At Columbia Sportswear, there is a similar mentality, of moving to cloud gradually while seeking the best of agile, on-premises efficiency and agility.

"With our business changing and growing as quickly as it is, and with us doing business and selling directly to consumers in over a hundred countries around the world, our data centers have to be adaptable. Our data and our applications have to be secure and available, no matter where we are in the world, whether you're on network or off-premises,” said Tim Melvin, Director of Global Technology Infrastructure at Columbia Sportswear.

"The software-defined data center has been a game-changer for us. It’s allowed us to take those technologies, host them where we need them, and with whatever cost configuration makes sense, whether it’s in the cloud or on-premises, and deliver the solutions that our business needs,” he said.


Added Melvin: "When you look at infrastructure and the choice between on-premise solutions, hybrid clouds, public and private clouds, I don't think it's a choice necessarily of which answer you choose. There isn't one right answer. What’s important for infrastructure professionals is to understand the whole portfolio and understand where to apply your high-power, on-premises equipment and where to use your lower-cost public cloud, because there are trade-offs in each case."

Columbia strives to present the correct tool for the correct job. For instance, they have completely virtualized their SAP environment to run on on-premises equipment. For software development, they use a public cloud.

And so the stepping stone to cloud flexibility: being able to run on-premises workloads like enterprise resource planning (ERP) and VDI with speed, agility, and low cost, and doing so in such a way that some day those workloads could migrate to a public cloud, when that makes sense.

"The closer we get to a complete software-defined infrastructure, the more flexibility and power we have to remove the manual components, the things that we all do a little differently and we can't do consistently. We have a chance to automate more. We have the chance to provide integrations into other tools, which is actually a big part of why we chose VMware as our platform. They allow such open integration with partners that, as we start to move our workloads more actively into the cloud, we know that we won't get stuck with a particular product or a particular configuration,” said Melvin.

"The openness will allow us to adapt and change, and that’s just something you don't get with hardware. If it's software-defined, it means that you can control it and you can morph your infrastructure in order to meet your needs, rather than needing to re-buy every time something changes with the business,” he said.

SDDC-in-a-box

What we're seeing now are more tactical implementations of the best of what cloud models and hyper-scale data center architectures can provide. And we’re seeing these deployments on a use-case basis, like VDI, rather than under a centralized IT mandate across all apps and IT resources. These deployments are so tactical that in many cases they consist of a single “box”: an appliance that combines the best of hyper-scale architecture and the simplicity of virtualization with the cost benefits and deployment ease of converged infrastructure.

This tactical approach is working because blocks of users and/or business units (or locations) can be satisfied, IT can gain efficiency and retain control, and these implementations can eventually become part of the pan-IT hybrid cloud strategy. Mid-market companies like this model because the hyper-converged appliance effectively is the data center: it scales down to their needs affordably and doesn't box them in when the time comes to expand or to move to a hybrid cloud model later.

What we're seeing now are more tactical implementations of the best of what cloud models and hyper-scale data center architectures can provide.

What newly enables this appealing stepping-stone approach to the hybrid cloud end-game? It’s the principles of SDDC – but without the data center. It’s using virtualization services to augment storage and back-up and disaster recovery (DR) without adopting an entire hybrid cloud model.

The numbers speak to the preferences of IT to adopt these new IT architectures in this fashion. According to IDC, the converged infrastructure segment of the IT market will expand to $17.8 billion in 2016 from $1.4 billion in 2013.

VSPEX BLUE is EVO:RAIL
plus EMC’s Management Products

A recent example of these hyper-converged infrastructure appliance (HCIA) parts coming together to serve tactical application-support strategies and provide a segue to the cloud is the EMC VSPEX BLUE appliance, which demonstrates a new degree to which total convergence can be taken.

The Intel x86 Xeon off-the-shelf hardware went on sale in February, and is powered by VMware EVO:RAIL and EMC’s VSPEX BLUE Manager, an integrated management layer that brings entirely new levels of simplicity and deployment ease.

This bundle extends the capabilities of EVO:RAIL into a much larger market, and provides the stepping stone to hyper-convergence across mid-market IT shops, and within departments or remote offices for larger enterprises. The VSPEX BLUE Manager integrates seamlessly into EVO:RAIL, leveraging the same design principles and UI characteristics that EMC is known for.

What’s more, because EVO:RAIL does not restrict integrations, it can be easily extended via the native element manager. The notion of hyper-converged becomes particularly powerful when it’s not a closed system, but rather an extremely powerful set of components that adjust to many environments and infrastructure requirements.

VSPEX BLUE is based on VMware's EVO:RAIL, a software-only appliance platform that supports VMware vSphere hypervisors. By integrating all the elements, the HCIA offers the simplicity of virtualization with the power of commodity hardware and cloud services. EMC and VMware have apparently done a lot of mutual work to increase the value-add on top of the commercial off-the-shelf (COTS) hardware, however.

The capabilities of VSPEX BLUE bring much more than a best-of-breed model alone; there is total cost predictability, simplicity of deployment, and a simplified path to expansion. This, for me, is where the software element of hyper-converged infrastructure is so powerful: the costs are far below proprietary infrastructure systems, and the speed-to-value in actual use is rapid.


For example, VSPEX BLUE can be switched on and begin provisioning virtual machines in less than 15 minutes, says EMC. Plus, EMC integrates its management software with EMC Secure Remote Support, which allows remote system monitoring by EMC to detect and remedy issues before they become failures. That adds the best of cloud services to the infrastructure support mix.

Last but not least, the new VSPEX BLUE Market is akin to an “app store” and is populated with access to products and 24x7 support from a single vendor, EMC. This consumer-like, context-appropriate procurement experience for appliances in the cloud is unique at this deep infrastructure level. It forms a responsive and well-populated marketplace for the validated products and services that admins need, and creates a powerful ecosystem for EMC and VMware partners.

EMC and VMware seem to recognize that the market wants to take proven steps, not blind leaps. The mid-market wants to solve their unique problems. To start, VSPEX BLUE offers just three applications: EMC CloudArray Gateway, which helps turn public cloud storage into an extra tier of capacity; EMC RecoverPoint for Virtual Machines, which protects against application outages; and VMware vSphere Data Protection Advanced, which provides disk-based backup and recovery.

Future offerings may include applications such as virus-scanning tools or software for purchasing capacity from public cloud services, and they may come from third parties, but will be validated by EMC.

The way these HCIA instances give enterprises and mid-market organizations the means to adapt to cloud at their own pace, with ease and simplicity, and to begin exploiting public cloud services that complement on-premises workloads with added reliability and security, shows that the vendors are waking up. The best of virtualization and the best of hardware integration are creating the preferred on-ramps to the cloud.

Disclosure: VMware is a sponsor of BriefingsDirect podcasts that I host and moderate. EMC paid for travel and lodging for a recent trip I made to EMCWorld.


Tags:  BriefingsDirect  Converged infrastructure  Dana Gardner  EMC  EVO:RAIL  HCIA  hybrid cloud  Interarbor Solutions  server virtualization  VMWare  VSPEX BLUE 


Winning the B2B commerce game: What sales organizations should do differently

Posted By Dana L Gardner, Saturday, May 16, 2015

The next BriefingsDirect thought-leader interview focuses on what winning sales organizations do to separate themselves from the competition by creating market advantage through improved user experiences and better information services.

We'll hear from RAIN Group about a recent study on sales that uncovers what sales leaders do differently to foster loyalty and gain repeat business.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

And we'll also hear from National Business Furniture on how they're leveraging online business networks to enable more collaborative and innovative processes that enhance their relationships, improve customer satisfaction, and boost sales.

Please join our guests, Mike Schultz, President of RAIN Group, based in Framingham, Mass., and Brady Seiberlich, IT e-Procurement and Development Manager at National Business Furniture, based in Milwaukee. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's changing the B2B sales dynamic? What can we do about it?

Schultz: It's really interesting. In the world of sales, if you fell asleep in 1982, having just read a sales book, and woke up in 2005 and went back to work, you didn't miss anything. It didn't really change that much.

Schultz

But there are a couple of things that have been happening in the last 10 years or so that have been making sales a lot different. It has changed more in the last 10 years than it did in the previous 40. So let’s look at two of the things.

The first one is that buyers perceive the offerings that different companies bring to them to be somewhat similar, somewhat interchangeable. What that means is that the sellers are no longer competing on saying, "Hey, here is the product, here is the service, and here's the benefit it’s going to get for you," because the other guy has something that the buyer perceives to be the same.

What they're actually competing on now is how to use and how to apply those services and products so the company actually gets the greatest benefit from them. That’s not actually the power of the offering; that is the power of the ideas, the innovation, and the collaboration that the sellers are bringing to the table. So there's one thing.

The other thing is the asymmetry of information has been changing. It used to be very asymmetrical, because the buyer had all the need and all the desire, but the seller had all the knowledge. Now, buyers can hop online and talk to user groups who have bought from you and see what everyone says about your pricing, and they can find your competitors really quickly. They can get a lot more information.

So there has been a leveling of the playing field, which brings us back to point number one. If the sellers want to compete, they have to be smarter than the average bear, smarter than they used to be. They used to be able to just take orders; they can't do that anymore and still win.

Gardner: Brady, is that what you're facing? What do you do differently about this new sales dynamic?

Seiberlich

Seiberlich: I definitely agree with Mike. In the last couple of years, buyers are getting smarter. They're trying to challenge us more. With the Internet, they have the ability to easily price compare, shop products, look at product reviews. They're so much more knowledgeable now.

Another thing that we found with our buyers is that they want the ordering process to be as easy as possible, whether it's through the Internet or an e-procurement system. You have to work a lot harder to make sure the buyer finds you as the easiest way to order.

We've really had to work hard at that and we've had to be able to adjust, because every buyer has needs and they all have different needs. We want to make sure we can cover as many of those different needs as possible without customizing the user experience for everybody.

The experience is important

Gardner: It sounds as if the experience of buying and procuring is as important as what you're buying.

Schultz: That’s actually what we found from our research. I said that sales has changed in the last 10 years more than it changed in the previous 40. Yet our industry is very sleepy. Most people keep doing the same thing, while on what they profess to be important there's a whole bunch of people saying a whole bunch of different things. It goes all the way up to the Harvard Business Review saying that solution sales is at its end.

They published an article, The End of Solution Sales, and they published an article, Selling Is Not About Relationships. So is this true? What's actually going on?

We did a study where we looked at 700 business-to-business (B2B) purchases from buyers who represented $3.1 billion of purchasing power. We wanted to find out what the buyer's experience was like with the seller they awarded the business to, versus the seller that almost got there but came in second place. When you sell, the person in first place gets the trip to Aruba, and the second-place person gets the trip back to their office.

Sellers that win don’t just sell differently; they sell radically differently than the sellers that even come in the closest second place.

What we found first of all, is that the sellers that win don’t just sell differently; they sell radically differently than the sellers that come in the closest second place. [Get a free copy of the RAIN Group report, What Sales Winners Do Differently.]

The product and service playing field was perceived by the buyer to be similar, especially by the time they got to the last two. Maybe they kicked out some lesser providers early, but when they get down to the end, both providers provide the technology, both can engineer the thing that's being built, and both can do what the buyer needs them to do.

It actually came down to the buyer experience with the seller and how the seller treated the buyer. What they did with the buyer were the tipping points for why they got awarded the business.

Gardner: Brady, what has changed in terms of your creating a better experience, a simple, direct, maybe even informative process for your customers? How do you accommodate what we have been talking about in terms of improved experience?

Flexible as possible

Seiberlich: We try to be as flexible as possible and we try to provide them with as much information as possible.

Information is huge for us. Back in the days when we first started, we mailed catalogs. For each piece of our furniture that we sell, you probably saw in the catalog seven pieces of information: how big it was, how much it weighed, what colors it came in.

Right now, for every piece of furniture we have, we hold over 100 pieces of information on it and we display a lot of that on the web. It's an ergonomic chair, it’s leather, it raises up and down, it comes with or without arms, things like that. We try to provide as much information as we can, because the shopper works harder now.

In the days of a catalog, where you had a catalog at your desk and you opened it up, there was no competition there. On the web, there's plenty of competition and everybody is trying to compete for that same dollar.

We try to be as flexible as possible and we try to provide them with as much information as possible.

We want to make the customer as informed as possible. The customer doesn’t want to necessarily have to call us and say, "Is this brown; how dark is this brown?" We want to give them as much information as possible and inform them, because they want to make the decision themselves and be done with it. We're trying to get better at that.

Gardner: I believe you are in your 40th year now at National Business Furniture. Tell us a little about your company: your scale, where you do business, and what it is precisely that you are selling?

Seiberlich: That is correct. This year we are celebrating our 40th anniversary, which is pretty exciting for us. We sell in the US and in Canada. We opened our first office in Canada a couple of years ago.

The main reason we sell mostly in the US market is what we sell. We sell office furniture: desks, chairs, and bookcases. That stuff is too heavy to ship overseas, and we can't compete with some of the vendors that are already selling over there. So we sell here in the US mostly. The majority of our business obviously comes from there.

We started as 100 percent catalog. In the early '90s we made a website that was just for browsing purposes. You couldn't shop off of it. In the late 1990s we added the ability to buy off of it, and right now we're up to about a 50/50 split in what comes through the catalog and what comes through via e-commerce. And in e-commerce, we include the Internet, the e-procurement system, and stuff like that.

So we've proven that we're still adjusting with it, but the weird thing is that some of our product lines haven’t changed that much. Traditional furniture is still traditional furniture. We are selling some very similar products, just 40 years later.

Different approach

Gardner: Given this change in the environment with the emphasis on experience and data, making good choices with a lot of different possible choices, if you're a buyer, what are you doing differently in order to keep your business healthy?

Is this a matter of having more strategic long-term predictable sales? Do you go about marketing in a different way? Have you changed the actual selling process in some fashion? How are you adjusting?

Seiberlich: Probably all of the above. We're always looking for new markets to sell to. We've just started to move into medical furniture and we're doing some new things there.

The government has different rules for buying. So we're trying to make sure that we can adhere to those and make sure that’s an open market for us. And we continue to just try and find better ways to do things. That's what separates us from our competitors.

The days of establishing a relationship and just hoping that will carry you for years have kind of come and gone.

Everyone who sells office furniture is selling similar products at around the same price. So we have to do something to differentiate ourselves, and we do. We try to make the process easy, we try to provide the customer with as much information as possible, and we just want to make it a smooth process.

The days of establishing a relationship and just hoping that will carry you for years, like Mike said, have kind of come and gone. So we've got to work harder to keep our existing customers. We're doing that and also trying to find ways to find new customers, too.

Gardner: We are here at Ariba LIVE. We're hearing a lot about business networks, end-to-end processes, using different partners and different suppliers to create a solution within that end-to-end process. What is it about business networks that helps you attain your goals of a smoother data-driven process for sales?

Seiberlich: When you can prove that you can collaborate over these networks, you have a success that you can show to other buyers. You can say, "We've proven we can do this." It shows that you have established yourselves in these different markets.

I'm sure everybody knows that nobody wants to be the guinea pig and try something new with somebody else. But we've proven that we can work on these different markets and different networks and continue to try to find ways to make it easier. That’s what we're really pushing.

Unpacking the term

Schultz: Dana, I wanted to add one quick thing on that. "Network" is one of those interesting words that you can unpack. You can unpack it in the technology sense that things are networked, but there's also the concept of a network that says that on the other side of this technology, there are people.

For a seller, what that means is that what you do here isn’t just what you do here; it starts to go out through technology to other people, and it amplifies whatever you do.

So, if you're doing a pretty bad job, people are going to hear that it’s a pretty bad job a lot faster than they used to. But if you are doing something interesting, if you are doing something worthwhile, if you are doing something like Brady is talking about, saying, "Wow, this process really used to be a pain and now it's a lot better because of the technology," that will get through to more people.

If you're doing the things that I talked about earlier, if you're selling in ways that help buyers get the most use out of what you're selling, get the most benefit out of what you're selling, it’s no longer just words in a catalog saying, "This is how you're going to benefit."

If you're doing a pretty bad job, people are going to hear that it’s a pretty bad job a lot faster than they used to.

In some ways, you're going to benefit from working with us to get it, not just from the thing itself. The technology amplifies the good sellers, and they end up selling a lot more because it spreads faster.

Gardner: I suppose another part of the technology impact is convenience. When you're already in an environment, an application, a cloud, a network, maybe even a mobile interface, and the seller is in that same environment, if you are a buyer, that has some positive implications. Things can be integrated. Things can be repeatable. The data can be collected, shared, and analyzed.

Tell me a little bit, if you would, Brady, about being in a shared environment technically that also provides grease to the sales gears?

Seiberlich: It definitely does. We have some customers that we transact with here on Ariba, and in the first one, two, or three transactions, we had to work through some difficulties, but by transaction 10, 15, or 20, it’s just smooth and it goes right through. And that's what we're trying to push with other customers that buy from us: we are trying to get them moved over to the network.

We have a proven track record here. We are the highest rated furniture provider here. We are gold from the Ariba standpoint. So we're trying to push customers to continue to buy from us off of these networks, because we've proven how simple it can be and we want to continue to do that. We want to make the ordering process as simple as possible.

Transaction algorithm

Gardner: Mike, maybe looking a bit forward, if all things become equal in terms of the product and the information that’s available, and if we take that to its fullest extent, it really becomes a matter of transactional efficiency, even down to compressing payment, scheduling, and negotiating across actual transactions on a larger and larger scale. Where do we end up? Do sales go away altogether, so that it simply becomes a transaction algorithm?

Schultz: There were predictions about 10 years ago, when e-commerce took hold and the information asymmetry really started to shift toward buyers and they started to know more, that there were going to be fewer salespeople in the US economy.

US government data said that 1 out of 9 people working in the United States were in sales; that was in 2000. If you fast forward to now, the massive change has been that there are about 1 in 9 people working in sales. So it hasn’t changed; it’s just that they're not order takers anymore.

The other thing is that while things look the same, they still aren't always necessarily the same.

So the new challenge for buyers is to figure out what are the differences.

If you think about it, all this becomes price pressure. If this goes directly to microeconomics, and we are just buying commodity pork bellies, it has to be the exact same price because the elasticity works that way. Any shift is going to make it go to a different provider. That’s really not the case, because we're not all buying pork bellies.

I don’t know about you, but I don’t think that Brady is looking really well. Maybe he needs some heart surgery. I have a really cheap surgeon. Would you like to go see him? He's board-certified, and he is a really cheap heart surgeon. It’s like, oh jeez.

There is a lot of decision process and a lot of mental things built up about what cheap-versus-expensive means, especially because if you are not talking about pork bellies, it's not necessarily the same.

So the new challenge for buyers is to figure out what the differences are. This law firm says they have the same capabilities as that law firm, but in fact, one law firm is better. The question is how. It’s incumbent on buyers and sellers to figure that out together.

That’s why for law firms, consulting firms, accounting firms, I can't sink my teeth into them, bite them, and tell you which one is thicker or stronger, or which is going to have a 20-year guarantee versus a 10-year guarantee on the chair. I'm just trying to figure out who is actually better, who can serve me better, and who is the right fit. So it's not all commodities.

One other challenge, if you think about it from the buying side, is that it's not a big secret that heading into the purchasing department is not necessarily the career path that the top MBAs coming out of the top schools are dying to pursue.

Complicated purchases

There are some great people in purchasing, but a lot of the times, when we're talking to sellers, we're talking to sellers that are doing $5 and $10 million on very complicated things with buyers, and the purchasing person they're working with doesn't actually understand the business context of what they're trying to get done. So they're asking, "How do I actually get to interact with them when the rules are they don't let me talk to them?" This is $7 million. They're buying this like they're buying roofing shingles.

It's going to require as much sophistication from the buyers, to figure out what they really need and what the quality levels really are, as it does from the sellers to make sure that they bring forth the right ideas, craft the right solutions, and treat the buyers well.

Gardner: So clearly we've hit on reputation: being in an open, visible network where information can be traded gets to reputation, trust, and a track record. But it also sounds like we're talking about some sort of value-add to the buy.

And that’s one of our biggest selling points -- our people. That’s an important thing for us. They have the knowledge that they need and they're not just order takers.

If other things are equal, but the experience of buying and making a decision in a complex environment matters, then something else is needed, perhaps consulting, data, or analysis. So, Brady, what is potentially a value-add in your business to increase your likelihood of making the sale and then keeping that relationship with the buyer?

Seiberlich: We have a couple of things, but one of our most important things is that we've been around for 40 years. If you call either inside sales or customer service, you're going to get somebody with, on average, 10-plus years of furniture experience with us, and that's a big thing. They understand our products. Our vendors come into our office weekly and explain our products. Our salespeople know the products and they can really help you find a solution that fits you.

And that’s one of our biggest selling points -- our people. That’s an important thing for us. They have the knowledge that they need and they're not just order takers. They're much more. Everybody on our side who answers the phone is a furniture expert. That’s what they do.

Gardner: Do you find that those salespeople with that track record, with that depth of knowledge, are taking advantage of things like the Ariba Network to get more data, more analysis to help them? Have they made that leap yet to being data driven, rather than just experience driven?

Seiberlich: We're getting better and we're consistently improving.

I agree with Mike’s point, one of the hardest things is making sure that we align ourselves with our buyers’ needs, figuring out what’s important to them and then making sure we are addressing those situations. That’s a challenge, and when you figure it out today, it changes tomorrow. That makes it even more challenging.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Tags:  Ariba  Ariba LIVE  Ariba Network  Brady Seiberlich  BriefingsDirect  Dana Gardner  Interarbor Solutions  Mike Schultz  National Business Furniture  RAIN Group 


Ariba's product roadmap for 2015 leads to improved business cloud services

Posted By Dana L Gardner, Monday, May 04, 2015

The next BriefingsDirect thought-leader interview focuses on the Ariba product roadmap for 2015 -- and beyond.

Ariba’s product and services roadmap is rapidly evolving, including improved business cloud services, refined user experience features, and the use of increasingly intelligent networks. BriefingsDirect had an opportunity to learn first-hand at the recent 2015 Ariba LIVE Conference in Las Vegas.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about the recent news at Ariba LIVE -- and also what to expect from both Ariba and SAP in the coming months -- we sat down with Chris Haydon, Senior Vice President of Product Management at Ariba, an SAP company. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Before we get to the Ariba news, what do you see as having changed, developed, or evolved over the past year or so in the business network market?

Haydon: It’s been a very interesting year with a lot of learning and adoption for sure. There's a growing realization in companies that the networked economy is here to stay. You can no longer remain within the four walls of your business.

It really is about understanding that you are part of multiple business networks, not just a business network. There are business networks for finance, business networks for procurement, and so on. How do you leverage and harness those business networks to make your businesses more effective?

Balancing needs

Gardner: So it’s incumbent upon companies to take advantage of all these different networks and services, the data and analysis that’s driven from them. But at the same time, they need to retain simplicity. They need to make their users comfortable with this technology. They need to move toward a more mobile interface.

Haydon

How do we balance the need for expansion, amid complexity, with simplicity and direct processes?

Haydon: It’s a difficult balance. There are a couple of ways to think about it. Just to pick up on the point about how businesses are changing, certainly end-user expectations are changing dramatically. Whether it’s the millennials coming into the workforce or the nature of mobile apps in our personal lives, the need, the requirement, the desire to have that same experience in our business lives is there.

From an Ariba perspective, we believe our job is to manage complexity for our customers. That’s the value prop that people sign into. When we talk from a usability perspective about managing the complexity, it’s also about thinking about the individual persona or how the end-user really needs to interact to get the work done, how they can learn, and how they can use their different devices to work where they want and how they want.

Gardner: It seems to me that among the technology leaps we're making to accommodate this balance is a greater reliance on the network, on network-centric attributes -- intelligence driven into the network. How do you view the role of the network in this balance?

Haydon: I think it’s fundamental, and we're definitely seeing it almost as a tipping point. It’s no longer just about the transactions, but about the trusted relationships. It’s about visibility into the extended value chain, whether that value chain is a supply chain, financial payment chain, or logistics chain. It doesn’t matter what that process chain or that value chain is. It is insight into that trusted community, so you understand that it’s secure, that it’s scalable, and also that it's reliable and repeatable.

It’s no longer just about the transactions, but about the trusted relationships.

Gardner: It seems like we can put the word "hybrid" in front of so much these days. Tell us a little bit about why SAP HANA is so important to this network services tipping point. Many people think of HANA as a big data platform, but it’s quite a bit more.

Haydon: Yeah, it is. In Ariba we've made strides in leveraging HANA technology, first with the Spend Visibility program. The great message about HANA is that it's not HANA for technology's sake; it’s how HANA enables different business outcomes. That's the exciting thing. Whether it's on the Ariba Network or where we started in our analytical platform, with an average 50X to 80X improvement in some of the reports, that’s great.

What was really interesting when we put HANA on to our Spend Visibility was that we got more users doing different types of reports because they could do this, they could iterate, they could change, they could experiment in a more interactive and faster way. We saw upticks in the behavior of how customers use their products, and that's the excitement of HANA.

Taking it to the next step, as we apply HANA across our network and our other applications, in terms of better and different types of reporting and real-time visibility and insights from our trusted community, it’s just going to provide a differential level of value to end-users, whether they're buyers, sellers, or any of our partners.

Wider diversity

Gardner: So we have a wider diversity of network participants. We need to connect them. We’re leveraging the network to do that. We're leveraging the ability of a strong platform like HANA to give us analytics and other strong value adds, but we also need to bring that single platform, that single pane of glass value, to the mobile device.

User experience seems to be super important. Tell us a little bit about where you’re heading with that and introduce us to SAP Fiori.

Haydon: It’s a massive focus for us from an innovation perspective.

When we think about our user experience, it's not just about the user interface, albeit a very important part, but it's also the total experience that an end-user or a company has with the Ariba Suite and Business Network.

Fiori is an excellent user-interface design paradigm that SAP has led, and we have adopted Fiori elements and design paradigms within our applications, mobile as well as desktop.

You will see a vastly updated user interface, based on Fiori design principles, coming out in the summer, and we'll be announcing that here at Ariba LIVE and taking customers through some really interesting demos. But, as you mentioned earlier, it's not just about the user experience. It's really about end users; we call them personas from a product perspective. You're in accounts payable or you're a purchasing officer. That’s the hat you wear.

It really is about how you link, where you work, work anywhere, embracing modern design principles and learning across the whole user experience.

It really is about how you link, where you work, work anywhere, embracing modern design principles and learning across the whole user experience. We've got some interesting approaches for our mobile device. Let me talk about the crossover there.

We're launching and showing a new mobile app. We launched our mobile app early this year for Ariba’s Procurement suite. It had some great uptake in the first week, when 20 percent of our customers activated and rolled it out, and some of their end-users are progressively scaling that. Again, that's the power of mobile-app delivery. It shows the untapped demand, the untapped potential, of how end-users do, can, and want to interact with business applications today.

At Ariba LIVE 2015, we are also announcing a brand-new application that enables shopping-cart adding and searching for the casual, ad hoc end-user, so they can do their requisitioning and their ordering of contract items or ad hoc items wherever they are.

To finish off, just as excitingly, we're really looking to leverage the mobile device and take its abilities to create new user experience design paradigms. Let me give you an example of what that means. Let’s just say you're an accounts payable clerk and you're a very conscientious accounts payable clerk. You're on the bus, on the way to work, and you know you have got a lot of invoices to process. For example, you might decide you need to process an invoice from ACME Inc. before you do one for the next supplier.

On your mobile device, you can’t process detailed information about an invoice, but you can certainly put it in your queue, and when you get to your desktop, there it is at the top of your to-do queue.

Then, when you finish work, maybe you want to push a report on "How did I do today?" You did x things, you did y things, and you have that on your mobile device on the train on the way home. That's the kind of continuity construct that would bring you in, making the user experience about learning and about working where you are.

Behavioral aspect

Gardner: Before we go into the list of things that you're doing for 2015, let's tie this high-level discussion about the networked economy, the power of the network, intelligence driven into the network, and the user interface to the all-important behavioral aspect of users wanting to use these technologies.

One of the things that’s been interesting for me at Ariba LIVE is learning that user productivity is the go-to-market driver. The pull comes from users who say they want these apps, they don't want the old-fashioned way, and they want to be able to do some work on the train ride home and have notifications that allow them to push a business process forward or send it back.

So how do you see the future of the total technology mix coming to bear on that user productivity in such a way that they're actually going to demand these capabilities from their employers?

Haydon: It's interesting. Let's just use the example of a Chief Procurement Officer. As Chief Procurement Officer, you may have the old classic standard benefits of the total cost of ownership (TCO), cost reduction, and price reductions. But more and more, Chief Procurement Officers also realize that they have internal customers, their end-users.

If the end users can't adopt the systems and comply with the systems, what's the point?

If the end users can't adopt the systems and comply with the systems, what's the point? So, just getting to your point, it was an excellent thing. We're seeing the pull or the push, depending on your point of view, straight from the end user, straight through to the end-of -line outcome.

From an Ariba perspective, how this all comes together is really a couple of things. User design interactions are foremost in our design-thinking approaches. These different user design interactions make products do different things and work together. It also has some great impact on our platform, and this is where SAP and the HANA Cloud Platform give us a differential way to address these problems.

One of these aspects here is to keep up with these demands not necessarily out of left field, but out of specific market or industry requirements.

We need to make sure that we can expand our ecosystem from an Ariba perspective to encompass partners and even customers doing their own things with our platform that we don't even know about. Some specific investments with HANA and the HANA Cloud Platform are aimed at making our network more open, and we're also looking at some targeted extensibility scenarios for real applications.

Gardner: Let's go to the road map for 2015 Ariba products. Let's start with Spend Management. What's going on there?

A lot of innovations

Haydon: In 2014, we brought some 330-odd significant features to market, almost one a day. So we have delivered a lot of innovation.

About 89 percent of those were delivered in toggle mode -- configured on or configured off -- and this is important to our ongoing roadmap because we're cloud: we work with our customers in their own on-demand environment, and they entrust their business processes to us. Delivering more and more features in toggle mode lets our end users and our customers consume our innovation at their own pace, even though it's intrinsic to the product.

That's one big improvement we made in 2014 and we want to carry through in 2015. In terms of spend management, again, we have some great new investment in Ariba. SAP continues to invest in Ariba, and we continue to turn out more innovation.

We have some innovation around enhanced capabilities to support the public sector. We're adding and extending globalization capabilities. We're adding specific functionality to improve the security, and the encryption, of applications.

We have 16-odd years of transactional history on the Ariba Network. We look at that in conjunction with our customers.

Then, there are some more targeted features, whether it's improving demand aggregation for our procurement applications, supporting large line counts in our sourcing and contract management applications, or improving our catalog searching capabilities with type-ahead and improved content and publishing management. It's really end to end.

Gardner: There are sort of four buckets within spend management: indirect, contingent labor, direct, and supply chain management. The new big one was the Concur acquisition, for travel and expense. Anything new to offer on better spend management and better spend visibility across these buckets?

Haydon: Of course. When we work with our customers, we have 16-odd years of transactional history on the Ariba Network, and we look at that in conjunction with them. We see those four big spend segments -- indirect and MRO, as you mentioned; supply chain and direct; services and contingent labor; and travel and expense -- and, of course, the distribution of that spend changes per industry.

But what we're really focused on is making sure that we can get end-to-end outcomes for our customers across the source-to-pay process. I'll touch on all of them in turn.

In indirect MRO we're just continuing to drive deeper. We really want to address specific features in terms of compliance and greater spend categories, specifically with Spot Buy, which is a product we are out there trialing with a number of customers right now.

In contingent labor and services management, we've done some excellent work integrating the Ariba platform with the Fieldglass platform, and made some huge strides in linking purchase orders into Fieldglass. We let Fieldglass do what they do great -- they're the number one market leader -- and bring the invoices back to the network over the common adapter.

In terms of direct materials, logistics, and supply chain, we brought to market, as we mentioned last year, some direct materials supply-chain capability, and we're co-innovating with a number of customers right now. We added subcontracting purchase orders (POs) for complex scenarios in the summer and have done some great work in extending the capability to support consumer packaged goods and retail suppliers.

Interesting strides

We've made some really interesting strides there, again expanding the spend categories that we can support.

And last but not least, Concur. It's number one in travel, and we're excited to have it as part of the family. Again, from an SAP perspective, when you look at total spend, there's just an unparalleled capability to manage any spend segment. We're working pretty closely with Concur to ensure we have tight integration, and we're working out how we can leverage their invoicing capability as a complement to Ariba's.

Gardner: Line-of-business applications are one of the things that have intrigued me here. What I hear in your story is this "no middleware, yet expansive integration" -- end-to-end integration across business processes and data.

A resounding message from our customers . . . is that we need seamless, simpler integration between our cloud applications and our current applications.

So in this line-of-business category, explain to me how you can be so inclusive leveraging the technology. How does that work?

Haydon: Let me unpack that a little bit. A resounding message from our customers, particularly since the acquisition, is that we need seamless, simpler integration between our cloud applications and our current applications, whether they're on-premise or not.

I'll talk about Oracle and other clients in a little bit, but specifically for our SAP ERP systems, we’ve really worked hand in glove with our on-premise business-suite partners to understand how we can move from integrate to activate.

And so what we brought to market pretty significantly with the Business Suite is the ability for any SAP Business Suite customer to download an add-on that basically gives them out-of-the-box connectivity to the Ariba Network. We continue to invest in that with S/4HANA upcoming, where we are planning to have native connectivity to the Ariba Network as a standard feature of S/4HANA.

For our other customers, the Oracle customers and other major ERP systems out there, we continue to invest in open adapters to enable their procurement and finance processes across the network or with any of our cloud applications.

Gardner: There's something that's always important. We leave it to the end, but we probably shouldn't -- risk management. It seems to me that you're building more inherent risk-management features inside these applications and processes. It's another function of the technology. When you have great network-centric capabilities and a solid single platform to work from, you can start to do this. Tell us a little bit more about that.

Emerging area

Haydon: This is a really exciting and emerging area. More and more leading-practice companies are starting to manage their procurement and their supply chains on a risk basis: the risk, the continuity of supply, the security of supply. What happens if x, what happens if y? You eye your supply chain. If there is, heaven forbid, some contamination or traceability issue somewhere in your supply chain, and you're a large company or even a small company, now you're held accountable.

How do we start helping companies understand the risk that exists within their supply chain? We think that the business network is the best way to make sense of the risk that exists in your supply chain. Why?

One, because it's a connected community; and two, because of the premise. We already have the transactions -- $750 billion-plus in spend. We already have a million-plus trusted, connected relationships. But that's just the first step.

We also think about where we can have differential inputs, third-party inputs, on types of dimensions, and we think it's these risk dimensions or domains of information that matter, whether it's safety, performance, innovation, diversity, environment, or financial risk. It could be any of these domains, whether it's information from Dun and Bradstreet, information from Made In A Free World, which has a global slavery index. Whatever these dimensions of information are, we want to bring them in to our applications in the context of the transaction, in the context of the end-user.

Imagine when you do a sourcing event if you could be notified of some disruption or some type of risk in your supply chain before you finally award that sourcing event or before you finally sign the contract. That provides a differential level of outcome that can only really be delivered through a business network in a community.
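To make that sourcing-event scenario concrete, here is a toy sketch in Python of checking third-party risk scores for a shortlist before awarding the business. It is only an illustration of the idea Haydon describes, not Ariba functionality; the supplier names, the 0-to-1 score scale, and the threshold are all invented for the example.

    # Toy pre-award risk check across several risk domains.
    # The data feed, score scale, and threshold are hypothetical.
    RISK_SCORES = {
        "Supplier A": {"financial": 0.2, "environment": 0.1, "forced_labor": 0.0},
        "Supplier B": {"financial": 0.7, "environment": 0.3, "forced_labor": 0.1},
    }
    ALERT_THRESHOLD = 0.5             # assumed cutoff on a 0-1 scale

    def review_before_award(shortlist):
        for supplier in shortlist:
            flagged = {domain: score
                       for domain, score in RISK_SCORES.get(supplier, {}).items()
                       if score >= ALERT_THRESHOLD}
            if flagged:
                print(f"Hold award: {supplier} flagged on {sorted(flagged)}")
            else:
                print(f"{supplier}: no risk flags above threshold")

    review_before_award(["Supplier A", "Supplier B"])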

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Tags:  Ariba  Ariba LIVE  Ariba Network  BriefingsDirect  business network  Chris Haydon  Dana Gardner  Interarbor Solutions  networked economy 


How Globe Testing helps startups make the leap to cloud- and mobile-first development

Posted By Dana L Gardner, Thursday, April 30, 2015

This latest BriefingsDirect mobile development innovation discussion examines how Globe Testing, based in Madrid, helps startups make the leap to cloud-first and mobile-first software development.

We'll explore how Globe Testing pushes the envelope on Agile development and applications development management using HP tools and platforms.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about modern software testing as a service we're joined by Jose Aracil, CEO of Globe Testing, based in the company's Berlin office. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about Globe Testing. Are you strictly a testing organization? Do you do anything else? And how long have you been in existence?

Aracil

Aracil: We're a testing organization, and our services are around the Application Development Management (ADM) portfolio for HP Software. We work with tools such as HP LoadRunner, HP Quality Center, HP Diagnostics, and so on. We've been around for four years now, although most of our employees actually come from either HP Software or, back in the day, from Mercury Interactive. So, you could say that we're the real experts in this arena.

Gardner: Jose, what are the big issues facing software developers today? Obviously, speed has always been an issue and working quality into the process from start to finish has always been important, but is there anything particularly new or pressing about today's market when it comes to software development?

Scalability is key

Aracil: Scalability is a big issue. These days, most of the cloud providers would say that they can easily scale your instances, but for startups there are some hidden costs. If you're not coding properly, if your code is not properly optimized, the app might be able to scale -- but that’s going to have a huge impact on your books.

Therefore, the return on investment (ROI) when you're looking at HP Software is very clear. You work with the toolset. You have proper services, such as Globe Testing. You optimize your applications. And that’s going to make them cheaper to run in the long term.

Reduce post-production issues by 80 percent
Download the HP white paper
Build applications that meet business requirements

There are also things such as response time. Customers are very impatient. The old rule was that websites shouldn't take more than three seconds to load, but these days it's one second. If it's not instant, you just go and look for a different website. So response time is also something that is very worrying for our customers.
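To make that one-second budget concrete, here is a minimal sketch in Python (not an HP tool; the URL and the budget are placeholder assumptions) that times a single page fetch and flags it if it misses the target. It measures only the HTML round trip, not full page rendering.

    # Minimal response-time spot check (illustrative only; not an HP tool).
    # The URL is a placeholder, and only the HTML fetch is timed.
    import time
    import requests

    URL = "https://example.com/"      # hypothetical target page
    BUDGET_SECONDS = 1.0              # the one-second rule of thumb

    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    elapsed = time.perf_counter() - start

    print(f"{URL} -> HTTP {response.status_code} in {elapsed:.2f}s")
    if elapsed > BUDGET_SECONDS:
        print("Over budget: impatient visitors may move on")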

Gardner: So it sounds like cloud-first. We're talking about high scale, availability, and performance, but not being able to anticipate what that high scale might be at any given time. Therefore, creating a test environment, where you can assume that cloud-level performance is going to be required and test against it, becomes all the more important.

Aracil: Definitely. You need to look at performance in two ways. The first one is before the app goes into production in your environment. You need to be able to optimize the code there and make sure that your code is working properly and that the performance is up to your standard. Then, you need to run a number of simulations to see how the application is going to scale.

You might not reach the final numbers, and obviously it's very expensive to have those staging environments. You might not want to test with large numbers of users, but at least you need to know how the app behaves whenever you increase the load by 20 percent, 50 percent, and so on.
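As an illustration of that ramp-up idea, here is a toy staged load test in Python, using only the standard library plus the requests package. It is a sketch of the concept, not a stand-in for HP LoadRunner or Performance Center; the endpoint, baseline concurrency, and load steps are assumptions chosen for the example.

    # Toy staged load ramp: step concurrency up by 20%, 50%, 100% and watch
    # response times. Real tests would use a purpose-built tool.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://example.com/api/health"   # hypothetical endpoint
    BASELINE_USERS = 10                      # assumed baseline concurrency
    STEPS = [1.0, 1.2, 1.5, 2.0]             # +0%, +20%, +50%, +100% load

    def one_request(_):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        return time.perf_counter() - start

    for factor in STEPS:
        users = int(BASELINE_USERS * factor)
        with ThreadPoolExecutor(max_workers=users) as pool:
            timings = sorted(pool.map(one_request, range(users)))
        print(f"{users:3d} concurrent users -> "
              f"median {statistics.median(timings):.2f}s, "
              f"p95 {timings[int(0.95 * len(timings))]:.2f}s")

Watching how the median and the tail move at each step is the point of the exercise; a flat median with a ballooning p95 is an early sign that the app will not scale gracefully.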

The second aspect that you need to be looking at is when the app is in production. You can't just go into production and forget about the app.

The second aspect that you need to be looking at is when the app is in production. You can't just go into production and forget about the app. You need to carry on monitoring that app, make sure that you anticipate problems, and know about those problems before your end users call to tell you that your app is not up and running.

For both situations HP Software has different tools. You can count on HP Performance Center and HP Diagnostics when you're in preproduction, in your staging environment. Once you go live, you have different toolsets such as AppPulse, for example, which can monitor your application constantly. It's available as software as a service (SaaS), with very interesting pricing models, so it's very well-suited for the new startups that are coming out every day.
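The idea behind that kind of constant monitoring can be sketched in a few lines of Python. This is not AppPulse, just an illustration of a synthetic probe; the URL, threshold, and polling interval are assumptions.

    # Toy synthetic monitor: poll an endpoint and flag slowdowns or errors
    # before end users report them. Illustrative only -- not AppPulse.
    import time
    import requests

    URL = "https://example.com/"      # hypothetical endpoint to watch
    SLOW_SECONDS = 2.0                # assumed alert threshold
    INTERVAL_SECONDS = 60             # assumed polling interval

    def probe(url):
        start = time.perf_counter()
        try:
            status = requests.get(url, timeout=10).status_code
        except requests.RequestException as exc:
            return None, str(exc)
        return time.perf_counter() - start, status

    while True:
        elapsed, status = probe(URL)
        if elapsed is None:
            print(f"ALERT: {URL} unreachable ({status})")
        elif status >= 500 or elapsed > SLOW_SECONDS:
            print(f"ALERT: {URL} returned {status} in {elapsed:.2f}s")
        time.sleep(INTERVAL_SECONDS)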

Gardner: You're based in Berlin, and that's a hotbed of startup activity in Europe. Tell us what else is important to startups. I have to imagine that mobile and being ready to produce an application that can run in a variety of mobile environments is important, too.

Mobile is hot

Aracil: Definitely. Mobile is very hot right now in Berlin. Most of the startups we talk to are facing the same issue, which is compatibility. They all want to support every single platform available. We're not only talking about mobile and tablet devices, but we're also talking about the smart TVs and the wide array of systems that now should support the different applications that they're developing.

So being able to test on multiple operating systems and platforms and being able to automate as much as possible is very important for them. They need the tools that are very flexible and that can handle any given protocol. Again, HP Software, with things such as Unified Functional Testing (UFT), can help them.

Mobile Center, which was just released by HP Software, is also very interesting for startups and large enterprises as well, because we're seeing the same need there. Banking, for example, an industry which is usually very stable and very slow-paced, is also adopting mobile very quickly. Everyone wants to check their bank accounts online using their iPad, iPhone, or Android tablets and phones, and it needs to work on all of those.

Most of the startups we talk to are facing the same issue, which is compatibility. They all want to support every single platform available.

Gardner: Now going to those enterprise customers, they're concerned about mobile of course, but they're also now more-and-more concerned about DevOps and being able to tighten the relationship between their operating environment and their test and development organizations. How do some of these tools and approaches, particularly using testing as a service, come to bear on helping organizations become better at DevOps?

Aracil: DevOps is a very hot word these days. HP has come a long way. They're producing lots of innovation, especially with the latest releases. You not only need to take care of the testers, like in the old days, with manual testing, automation, and test management. Now, you also need to make sure that whatever assets you're developing in pre-production can be reused when you go into production.


Just to give you an example, with HP LoadRunner, the same scripts can be run in production to make sure that the system is still up and running. That also tightens the relationship between your Dev team and your Operations team. They work together much more than they used to.

Gardner: Okay, looking increasingly at performance and testing and development in general as a service, how are these organizations, both the startups and the enterprises, adapting to that? A lot of times cloud was attractive early to developers, they could fire up environments, virtualize environments, use them, shut them down, and be flexible. But what about the testing for your organization? Do you rely on the cloud entirely and how do you see that progressing?

Aracil: To give you an example, customers want their applications tested in the same way as real users would access them, which means they are accessing them from the Internet. So it's not valid to test their applications from inside the data center. You need to use the cloud. You need to access them from multiple locations. The old testing strategy isn't valid any more.

For us, Globe Testing as a service is very important. Right now, we're providing customers with teams that are geographically distributed. They can do things such as test automation remotely -- the scripts can then be sent to the customers and run locally -- and things such as performance testing, which is run directly from the cloud, in the same way that real users access the application.

And you can choose multiple locations, even simulating the kind of connections that these users are using. So you can simulate a 3G connection, a Wi-Fi connection, and the like.
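For a rough sense of what simulating a slower connection involves, here is a crude client-side approximation in Python: add round-trip latency up front and pace the download so it never exceeds a capped throughput. Real tools shape traffic at the network layer; the 3G-like latency and bandwidth figures below are assumptions for illustration only.

    # Crude client-side simulation of a slow mobile connection.
    # The URL and the "3G" numbers are placeholders.
    import time
    import requests

    URL = "https://example.com/large-asset"   # hypothetical resource
    LATENCY_SECONDS = 0.3                     # assumed 3G round-trip delay
    BANDWIDTH_BYTES_PER_SEC = 200_000         # roughly 1.6 Mbit/s downlink
    CHUNK_SIZE = 16_384

    time.sleep(LATENCY_SECONDS)               # simulate connection latency
    start = time.perf_counter()
    downloaded = 0
    with requests.get(URL, stream=True, timeout=30) as resp:
        for chunk in resp.iter_content(chunk_size=CHUNK_SIZE):
            downloaded += len(chunk)
            # Pace reads so the average rate stays under the simulated cap.
            target = downloaded / BANDWIDTH_BYTES_PER_SEC
            actual = time.perf_counter() - start
            if target > actual:
                time.sleep(target - actual)

    elapsed = LATENCY_SECONDS + (time.perf_counter() - start)
    print(f"Fetched {downloaded} bytes in {elapsed:.1f}s at simulated 3G speed")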

Other trends

Gardner: I suppose other trends we're seeing are rapid iterations and microservices. The use of application programming interfaces (APIs) is increasing. All of these, I think, are conducive to a cloud testing environment, so that you can be rapid and bring in services. How is that working? How do you see your customers -- and maybe you can provide some examples to illustrate this -- working toward cloud-first, mobile-first, and these more rapid innovations, even microservices?

Aracil: In the old days, most of the testing was done from an end-to-end perspective. You would run a test case that was heavily focused on the front end, and that would run the end-to-end case. These days, for these kinds of customers that you mentioned we're focusing on these services. We need to be able to develop some of the scripts before the end services are up and running, in which case things such as Service Virtualization from HP Software are very useful as well.

For example, one of our customers is Ticketmaster, a large online retailer that sells tickets for concerts. Whenever there's a big gig happening in town, whenever one of these large bands shows up, tickets run out extremely quickly.

Their platform goes from an average of hundreds of users a day to thousands of users all of a sudden, in a very short period of time. They need to be able to scale very quickly to cope with that load. For that, we need to test from the cloud, and we need to test each of those microservices constantly to make sure everything will scale properly. HP LoadRunner is the tool we chose for that.
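
To make the idea of a sudden ramp concrete, here is a minimal Python sketch that steps concurrency up from a small baseline to a spike against a single service endpoint and reports average latency per step. The endpoint, user counts, and durations are assumptions for illustration, not figures from the engagement described above.

```python
import concurrent.futures
import time
import requests

SERVICE_URL = "https://example.com/api/availability"  # hypothetical microservice
RAMP = [(10, 30), (100, 30), (1000, 60)]  # (concurrent workers, seconds) per step

def hit_endpoint():
    try:
        response = requests.get(SERVICE_URL, timeout=10)
        return response.status_code, response.elapsed.total_seconds()
    except requests.RequestException:
        return None, None

def run_step(workers, duration):
    """Keep `workers` concurrent requests going for roughly `duration` seconds."""
    deadline = time.monotonic() + duration
    latencies = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        while time.monotonic() < deadline:
            futures = [pool.submit(hit_endpoint) for _ in range(workers)]
            for future in futures:
                status, latency = future.result()
                if status == 200:
                    latencies.append(latency)
    return latencies

if __name__ == "__main__":
    for workers, duration in RAMP:
        latencies = run_step(workers, duration)
        if latencies:
            avg = sum(latencies) / len(latencies)
            print(f"{workers} workers: {len(latencies)} OK, avg latency {avg:.3f}s")
```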

Gardner: Do you have any examples of companies that are doing Application Development Management (ADM), that is to say, a more inclusive, complete application-lifecycle approach? Are they thinking about this holistically and making it a core competency? How does that help them? Is there an economic benefit, in addition to the technical benefits, when you adopt a full lifecycle approach to development, test, and deployment?

Aracil: To give you an example of the economic benefit, we did a project for a very large startup where all of their systems were cloud-based. We used HP LoadRunner and HP Diagnostics to examine the code and optimize it in conjunction with their development team. By optimizing that code, they reduced the number of cloud instances required by one-third, which means a 33 percent saving on their monthly bill. That's straight savings, very important.
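
The work described here used HP Diagnostics; as a generic illustration of the same kind of exercise, the sketch below uses Python's standard-library profiler to compare a deliberately slow function against an optimized variant. The function names and data are hypothetical; the point is that profiling shows where CPU time goes, which is what justifies trimming instance counts.

```python
import cProfile
import pstats

def slow_lookup(items, targets):
    # Hypothetical hot spot: repeated linear scans of a list
    return [item for item in items if item in targets]

def optimized_lookup(items, targets):
    # Same result using a set, avoiding repeated scans
    target_set = set(targets)
    return [item for item in items if item in target_set]

if __name__ == "__main__":
    items = list(range(50_000))
    targets = list(range(0, 50_000, 7))
    for func in (slow_lookup, optimized_lookup):
        profiler = cProfile.Profile()
        profiler.runcall(func, items, targets)
        print(f"--- {func.__name__} ---")
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(3)
```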

Another example is a large telecommunications company in Switzerland. Sometimes we focus not only on the benefits for IT, but also on the people who actually use those services, for example, the customers who go into retail shops to get a new iPhone or to activate a new contract.

If the systems are not fast enough, you will sometimes see queues of people, which turns into lower sales. If you optimize those systems, the agents can process contracts much more quickly. In this specific case, using HP Performance Center helped reduce processing time to one-fifth of what it had been. The following Christmas, queues literally disappeared from those retail shops, and that turned into higher sales for the customer.

Gardner: Jose, what about the future? What is of interest to you as an HP partner? You mentioned the mobile test products and services. Is there anything else of particular interest, or anything on the big data side that you can bring to bear on development, or that helps developers make better use of analytics?

Big data

Aracil: There are a number of innovations coming out this year that are extremely interesting to us, such as HP AppPulse Mobile and StormRunner; both are new and very innovative tools.

When it comes to big data, I'm very excited to see the next releases in the ALM suite from HP, because I think they will make heavy use of big data. They will try to capture all the information testers enter into the application, starting from the requirements. Predictive testing and traceability will be much better handled by that kind of big-data system. I think we'll need to wait a few more months, but there are new innovations coming in that area as well.
Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

Tags:  Agile  BriefingsDirect  cloud computing  Dana Gardner  Globe Testing  HP  HPDiscover  Interarbor Solutions  Jose Aracil  mobile app development  mobile computing 